Jan 27 14:11:37 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 14:11:37 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:11:37 crc restorecon[4692]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 
crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 14:11:37 crc restorecon[4692]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 
crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:37 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:11:38 crc restorecon[4692]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc 
restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:11:38 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 14:11:38 crc kubenswrapper[4833]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.933894 4833 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945619 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945653 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945658 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945663 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945669 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945673 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945680 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945684 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945689 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:11:38 
crc kubenswrapper[4833]: W0127 14:11:38.945694 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945701 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945706 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945713 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945720 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945727 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945733 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945737 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945741 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945745 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945750 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945761 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945765 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945769 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945774 4833 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945779 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945785 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945791 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945795 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945799 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945804 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945808 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945813 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945817 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945821 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945827 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945832 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945837 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945841 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945846 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945850 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945854 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945858 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945863 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945867 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945871 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945875 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945879 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945884 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945889 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945893 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945897 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945902 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945907 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945912 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945917 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945921 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945926 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945931 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945934 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945939 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945943 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945948 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:11:38 crc 
kubenswrapper[4833]: W0127 14:11:38.945951 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945955 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945958 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945964 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945968 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945972 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945976 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945979 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.945984 4833 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946858 4833 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946878 4833 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946890 4833 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946896 4833 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946903 4833 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946908 4833 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 
14:11:38.946916 4833 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946922 4833 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946926 4833 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946931 4833 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946936 4833 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946941 4833 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946947 4833 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946951 4833 flags.go:64] FLAG: --cgroup-root="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946956 4833 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946960 4833 flags.go:64] FLAG: --client-ca-file="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946964 4833 flags.go:64] FLAG: --cloud-config="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946968 4833 flags.go:64] FLAG: --cloud-provider="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946972 4833 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946982 4833 flags.go:64] FLAG: --cluster-domain="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946986 4833 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946990 4833 flags.go:64] FLAG: --config-dir="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.946995 4833 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 14:11:38 
crc kubenswrapper[4833]: I0127 14:11:38.946999 4833 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947005 4833 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947010 4833 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947014 4833 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947019 4833 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947024 4833 flags.go:64] FLAG: --contention-profiling="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947028 4833 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947032 4833 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947037 4833 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947041 4833 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947047 4833 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947051 4833 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947055 4833 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947060 4833 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947068 4833 flags.go:64] FLAG: --enable-server="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947074 4833 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947080 4833 flags.go:64] 
FLAG: --event-burst="100" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947085 4833 flags.go:64] FLAG: --event-qps="50" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947090 4833 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947096 4833 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947104 4833 flags.go:64] FLAG: --eviction-hard="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947122 4833 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947128 4833 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947133 4833 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947139 4833 flags.go:64] FLAG: --eviction-soft="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947144 4833 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947150 4833 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947155 4833 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947161 4833 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947167 4833 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947173 4833 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947178 4833 flags.go:64] FLAG: --feature-gates="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947186 4833 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947193 4833 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947200 4833 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947207 4833 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947213 4833 flags.go:64] FLAG: --healthz-port="10248" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947218 4833 flags.go:64] FLAG: --help="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947223 4833 flags.go:64] FLAG: --hostname-override="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947229 4833 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947234 4833 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947240 4833 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947246 4833 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947251 4833 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947257 4833 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947262 4833 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947267 4833 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947272 4833 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947277 4833 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947283 4833 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947290 4833 
flags.go:64] FLAG: --kube-reserved="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947296 4833 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947301 4833 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947306 4833 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947311 4833 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947316 4833 flags.go:64] FLAG: --lock-file="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947321 4833 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947326 4833 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947331 4833 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947345 4833 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947351 4833 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947356 4833 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947361 4833 flags.go:64] FLAG: --logging-format="text" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947367 4833 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947372 4833 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947377 4833 flags.go:64] FLAG: --manifest-url="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947382 4833 flags.go:64] FLAG: --manifest-url-header="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947389 4833 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947394 4833 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947401 4833 flags.go:64] FLAG: --max-pods="110" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947406 4833 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947411 4833 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947417 4833 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947422 4833 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947427 4833 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947467 4833 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947475 4833 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947493 4833 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947499 4833 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947505 4833 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947511 4833 flags.go:64] FLAG: --pod-cidr="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947517 4833 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947529 4833 flags.go:64] FLAG: 
--pod-manifest-path="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947534 4833 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947541 4833 flags.go:64] FLAG: --pods-per-core="0" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947546 4833 flags.go:64] FLAG: --port="10250" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947553 4833 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947558 4833 flags.go:64] FLAG: --provider-id="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947563 4833 flags.go:64] FLAG: --qos-reserved="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947569 4833 flags.go:64] FLAG: --read-only-port="10255" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947575 4833 flags.go:64] FLAG: --register-node="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947581 4833 flags.go:64] FLAG: --register-schedulable="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947586 4833 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947596 4833 flags.go:64] FLAG: --registry-burst="10" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947601 4833 flags.go:64] FLAG: --registry-qps="5" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947606 4833 flags.go:64] FLAG: --reserved-cpus="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947611 4833 flags.go:64] FLAG: --reserved-memory="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947619 4833 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947624 4833 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947630 4833 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 
14:11:38.947634 4833 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947641 4833 flags.go:64] FLAG: --runonce="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947646 4833 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947653 4833 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947660 4833 flags.go:64] FLAG: --seccomp-default="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947666 4833 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947673 4833 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947683 4833 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947691 4833 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947698 4833 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947704 4833 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947710 4833 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947717 4833 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947723 4833 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947729 4833 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947734 4833 flags.go:64] FLAG: --system-cgroups="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947740 4833 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947751 4833 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947757 4833 flags.go:64] FLAG: --tls-cert-file="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947762 4833 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947772 4833 flags.go:64] FLAG: --tls-min-version="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947777 4833 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947784 4833 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947790 4833 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947795 4833 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947801 4833 flags.go:64] FLAG: --v="2" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947808 4833 flags.go:64] FLAG: --version="false" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947816 4833 flags.go:64] FLAG: --vmodule="" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947822 4833 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.947826 4833 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948044 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948055 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948061 4833 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948068 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948074 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948079 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948084 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948090 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948096 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948100 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948106 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948111 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948115 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948120 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948128 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948133 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948138 4833 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948143 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948147 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948152 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948157 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948162 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948166 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948171 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948176 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948181 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948185 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948189 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948195 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948201 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948206 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:11:38 crc kubenswrapper[4833]: 
W0127 14:11:38.948210 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948216 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948222 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948228 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948233 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948238 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948243 4833 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948248 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948252 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948257 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948262 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948266 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948271 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948277 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948282 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948288 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948293 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948298 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948303 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948309 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948313 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948318 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948322 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948327 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948331 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948336 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948340 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948344 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 
14:11:38.948350 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948355 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948359 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948364 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948369 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948374 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948379 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948383 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948388 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948392 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948396 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.948400 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.949399 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.964281 4833 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.964668 4833 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964889 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964914 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964923 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964933 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964943 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964952 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964960 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964971 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964983 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.964992 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965001 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965009 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965017 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965025 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965033 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965042 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965050 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965058 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965066 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965086 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965095 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965103 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965113 4833 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965123 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965140 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965158 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965170 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965182 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965192 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965201 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965209 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965217 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965225 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965232 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965240 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965248 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965256 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:11:38 crc 
kubenswrapper[4833]: W0127 14:11:38.965264 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965272 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965280 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965291 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965301 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965310 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965318 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965326 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965334 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965342 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965350 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965359 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965367 4833 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965375 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965386 
4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965397 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965407 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965416 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965476 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965487 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965497 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965506 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965514 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965522 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965530 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965538 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965545 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965553 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965561 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965568 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965576 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965584 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965591 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965599 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.965613 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965917 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965932 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965942 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965951 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965960 4833 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965968 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965976 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965984 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.965992 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966000 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966008 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966016 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966024 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966031 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966040 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966048 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966055 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966064 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966072 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966091 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966100 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966108 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966118 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966130 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966138 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966147 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966155 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966163 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966171 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966179 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966191 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966202 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966210 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966219 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966228 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966237 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966246 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966255 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966263 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966271 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966278 4833 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966286 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966294 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966302 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966310 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 
14:11:38.966318 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966326 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966336 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966346 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966357 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966367 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966376 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966385 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966392 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966400 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966419 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966427 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966435 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966465 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 
14:11:38.966473 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966481 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966488 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966497 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966504 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966512 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966520 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966527 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966535 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966545 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966565 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:11:38 crc kubenswrapper[4833]: W0127 14:11:38.966573 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.966586 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.967725 4833 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.975427 4833 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.976530 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.978376 4833 server.go:997] "Starting client certificate rotation" Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.978433 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.978820 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 12:36:17.921831655 +0000 UTC Jan 27 14:11:38 crc kubenswrapper[4833]: I0127 14:11:38.979003 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.013294 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.017273 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" 
logger="UnhandledError" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.018608 4833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.042641 4833 log.go:25] "Validated CRI v1 runtime API" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.083050 4833 log.go:25] "Validated CRI v1 image API" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.085022 4833 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.090491 4833 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-14-06-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.090558 4833 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.105385 4833 manager.go:217] Machine: {Timestamp:2026-01-27 14:11:39.103612992 +0000 UTC m=+0.754937414 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:6c7669d1-0a53-46b1-a135-adc3df727a2e 
BootID:0bb61b53-6253-4e68-9a38-d0d5935c7c24 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9e:57:34 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:57:34 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:64:26 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9f:f5:57 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2f:51:09 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fc:4b:a1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:b6:87:8b:43:16 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:72:15:fb:09:11:49 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] 
Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.105672 4833 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.105849 4833 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.109360 4833 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.109589 4833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.109631 4833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.109888 4833 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.109900 4833 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.111437 4833 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.111627 4833 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.112087 4833 state_mem.go:36] "Initialized new in-memory state store" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.117393 4833 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.125398 4833 kubelet.go:418] "Attempting to sync node with API server" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.125461 4833 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.125494 4833 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.125513 4833 kubelet.go:324] "Adding apiserver pod source" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.125530 4833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.129702 4833 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.131467 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.133928 4833 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.135172 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.135283 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.135184 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.135365 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137359 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137385 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 
14:11:39.137392 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137399 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137410 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137416 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137424 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137434 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137458 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137465 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137476 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137483 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.137514 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.138006 4833 server.go:1280] "Started kubelet" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.139418 4833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.139548 4833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 14:11:39 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.144740 4833 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.146353 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.148924 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.148967 4833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.149223 4833 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.149266 4833 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.149298 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.149501 4833 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.149391 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:43:41.039953254 +0000 UTC Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.150340 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms" Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.150381 4833 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.150553 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.151243 4833 server.go:460] "Adding debug handlers to kubelet server" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.155947 4833 factory.go:55] Registering systemd factory Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.155976 4833 factory.go:221] Registration of the systemd container factory successfully Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.160497 4833 factory.go:153] Registering CRI-O factory Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.160519 4833 factory.go:221] Registration of the crio container factory successfully Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.160629 4833 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.160662 4833 factory.go:103] Registering Raw factory Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.160685 4833 manager.go:1196] Started watching for new ooms in manager Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.161866 4833 manager.go:319] Starting recovery of all containers Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 
14:11:39.162956 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9be221f797a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:11:39.13797213 +0000 UTC m=+0.789296532,LastTimestamp:2026-01-27 14:11:39.13797213 +0000 UTC m=+0.789296532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169139 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169226 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169243 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169255 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169268 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169285 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169298 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169313 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169329 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169342 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169355 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169370 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169382 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169462 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169507 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169520 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169533 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169546 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169561 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169574 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169587 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169627 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169640 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169655 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169667 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169706 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169724 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169736 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169750 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169763 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169775 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169789 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169801 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169814 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169827 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169840 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169852 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169866 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169879 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169895 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169907 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169921 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169935 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169948 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169962 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169976 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.169994 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170008 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170023 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170041 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170064 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170077 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" 
Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170095 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.170110 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172319 4833 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172352 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172371 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172386 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172403 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172418 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172431 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172465 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172483 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172498 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172511 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172524 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172540 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172555 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172570 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172583 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 14:11:39 crc 
kubenswrapper[4833]: I0127 14:11:39.172598 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172614 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172628 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172641 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172653 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172666 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172681 4833 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172693 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172708 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172722 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172735 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172750 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172764 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172778 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172793 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172807 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172822 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172839 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172852 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172868 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172884 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172897 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172917 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172931 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172944 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" 
seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172958 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172971 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172983 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.172997 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173013 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173031 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 
14:11:39.173046 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173059 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173077 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173091 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173112 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173142 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173162 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173177 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173192 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173207 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173222 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173237 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173251 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173265 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173279 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173292 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173304 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173318 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173332 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173345 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173359 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173371 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173385 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173398 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173410 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173423 4833 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173436 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173467 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173484 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173497 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173508 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173523 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173535 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173549 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173563 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173574 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173587 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173600 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173612 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173627 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173639 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173652 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173665 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173677 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173688 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173702 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173713 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173724 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173736 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173748 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" 
seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173762 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173774 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173785 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173796 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173809 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173823 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173836 4833 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173851 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173863 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173877 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173887 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173898 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173910 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173921 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173932 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173943 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173957 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173971 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173982 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.173993 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174006 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174017 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174086 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174103 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174114 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174126 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174138 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174151 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174162 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174173 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174185 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174197 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174212 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174263 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174283 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174297 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174308 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174320 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174332 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174344 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.174357 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178699 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178756 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178812 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178843 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178887 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178916 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178942 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.178983 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179009 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179048 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179070 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179093 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179121 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179188 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179239 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179275 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179298 4833 reconstruct.go:97] "Volume reconstruction finished" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.179313 4833 reconciler.go:26] "Reconciler: start to sync state" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.190801 4833 manager.go:324] Recovery completed Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.200455 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.205020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.205073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.205086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.206151 4833 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.206173 4833 cpu_manager.go:226] 
"Reconciling" reconcilePeriod="10s" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.206200 4833 state_mem.go:36] "Initialized new in-memory state store" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.207489 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.209207 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.209252 4833 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.209283 4833 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.209334 4833 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.210781 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.210840 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.236850 4833 policy_none.go:49] "None policy: Start" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.238341 4833 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.238378 4833 state_mem.go:35] 
"Initializing new in-memory state store" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.249663 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.294701 4833 manager.go:334] "Starting Device Plugin manager" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.294772 4833 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.294789 4833 server.go:79] "Starting device plugin registration server" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.295379 4833 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.295402 4833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.295568 4833 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.295679 4833 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.295693 4833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.307388 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.309434 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:11:39 crc kubenswrapper[4833]: 
I0127 14:11:39.309643 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.311722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.311773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.311789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.312000 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.312245 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.312310 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.313194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.313242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.313255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.313433 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.313941 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314041 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314767 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314932 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314996 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.314944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.315158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.315223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316467 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc 
kubenswrapper[4833]: I0127 14:11:39.316615 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.316668 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.317950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.318321 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.318356 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.319412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.319438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.319471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.351461 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380754 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380778 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380816 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380832 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380847 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380929 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380965 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380981 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.380996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.395656 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.397156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.397191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.397204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.397227 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.397732 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" 
Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482689 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482721 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482732 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482785 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482818 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482819 
4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482938 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482954 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482939 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482993 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.482941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.483090 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.483173 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.483212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.483369 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.598160 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.599890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.599954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.599971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.600001 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.600822 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Jan 27 14:11:39 crc 
kubenswrapper[4833]: I0127 14:11:39.642641 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.666408 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.686565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.694240 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: I0127 14:11:39.698959 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.729356 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5412ff16c2c6375e174da2b3d7bdd01b77347ab3053f0ee421594a8c66b27685 WatchSource:0}: Error finding container 5412ff16c2c6375e174da2b3d7bdd01b77347ab3053f0ee421594a8c66b27685: Status 404 returned error can't find the container with id 5412ff16c2c6375e174da2b3d7bdd01b77347ab3053f0ee421594a8c66b27685 Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.731668 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ace359771de3a652f6420cf7c534e516e8d9ef2020c1a161b19d2784d0a21226 WatchSource:0}: Error finding container ace359771de3a652f6420cf7c534e516e8d9ef2020c1a161b19d2784d0a21226: Status 404 returned error can't find the container with id 
ace359771de3a652f6420cf7c534e516e8d9ef2020c1a161b19d2784d0a21226 Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.738736 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1a276dfaf7b6eca7fbc0d8e4e9d877385b5fd612892d15c225ca4543d3cdaff5 WatchSource:0}: Error finding container 1a276dfaf7b6eca7fbc0d8e4e9d877385b5fd612892d15c225ca4543d3cdaff5: Status 404 returned error can't find the container with id 1a276dfaf7b6eca7fbc0d8e4e9d877385b5fd612892d15c225ca4543d3cdaff5 Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.742633 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cd18611ad43a616357161eb16df0cf4729d249dfc6cc3954900a6c0dbdf3c014 WatchSource:0}: Error finding container cd18611ad43a616357161eb16df0cf4729d249dfc6cc3954900a6c0dbdf3c014: Status 404 returned error can't find the container with id cd18611ad43a616357161eb16df0cf4729d249dfc6cc3954900a6c0dbdf3c014 Jan 27 14:11:39 crc kubenswrapper[4833]: W0127 14:11:39.743298 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e98f11c83ae83537b624d6a471c1bf38cf0d35d40708645cb1498990b5dff2b2 WatchSource:0}: Error finding container e98f11c83ae83537b624d6a471c1bf38cf0d35d40708645cb1498990b5dff2b2: Status 404 returned error can't find the container with id e98f11c83ae83537b624d6a471c1bf38cf0d35d40708645cb1498990b5dff2b2 Jan 27 14:11:39 crc kubenswrapper[4833]: E0127 14:11:39.752277 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" 
interval="800ms" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.001193 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.004193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.004320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.004345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.004382 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.005164 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Jan 27 14:11:40 crc kubenswrapper[4833]: W0127 14:11:40.085678 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.085800 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:40 crc kubenswrapper[4833]: W0127 14:11:40.100747 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.100911 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.147686 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.149734 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:19:06.080194082 +0000 UTC Jan 27 14:11:40 crc kubenswrapper[4833]: W0127 14:11:40.165705 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.165819 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.214759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1a276dfaf7b6eca7fbc0d8e4e9d877385b5fd612892d15c225ca4543d3cdaff5"} Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.216595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ace359771de3a652f6420cf7c534e516e8d9ef2020c1a161b19d2784d0a21226"} Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.217723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5412ff16c2c6375e174da2b3d7bdd01b77347ab3053f0ee421594a8c66b27685"} Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.219582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cd18611ad43a616357161eb16df0cf4729d249dfc6cc3954900a6c0dbdf3c014"} Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.220605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e98f11c83ae83537b624d6a471c1bf38cf0d35d40708645cb1498990b5dff2b2"} Jan 27 14:11:40 crc kubenswrapper[4833]: W0127 14:11:40.293757 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.293847 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.553069 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.806326 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.808489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.808557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.808578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:40 crc kubenswrapper[4833]: I0127 14:11:40.808622 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:40 crc kubenswrapper[4833]: E0127 14:11:40.809432 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.061772 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:11:41 crc kubenswrapper[4833]: E0127 14:11:41.063513 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate 
from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.147736 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.150022 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:07:21.430784999 +0000 UTC Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.226401 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d" exitCode=0 Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.226504 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d"} Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.226579 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.228267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.228311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.228325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.229941 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a" exitCode=0 Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.230019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a"} Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.230085 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.232793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.232849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.232866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.234520 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2" exitCode=0 Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.234582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2"} Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.234673 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.235548 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.235815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.235879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.235905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.237557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.237588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.237603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.240924 4833 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024" exitCode=0 Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.241143 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.241760 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024"} Jan 27 14:11:41 crc 
kubenswrapper[4833]: I0127 14:11:41.242662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.242694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.242710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.243890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b"} Jan 27 14:11:41 crc kubenswrapper[4833]: I0127 14:11:41.243943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b"} Jan 27 14:11:41 crc kubenswrapper[4833]: W0127 14:11:41.987104 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:41 crc kubenswrapper[4833]: E0127 14:11:41.987640 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.148079 4833 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.150221 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:55:23.808486287 +0000 UTC Jan 27 14:11:42 crc kubenswrapper[4833]: E0127 14:11:42.153879 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.259814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.259890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.259912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.259927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.262605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.262766 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.263849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.263885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.263899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.267213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.267279 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.267291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.267341 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.269179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.269234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.269244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.273604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.273639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.273736 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.274968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.275003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.275016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.277576 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6" exitCode=0 Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.277642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6"} Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.277752 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.279188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.279219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.279230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.411485 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.412798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.412837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.412848 
4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:42 crc kubenswrapper[4833]: I0127 14:11:42.412872 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:42 crc kubenswrapper[4833]: E0127 14:11:42.413369 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.128:6443: connect: connection refused" node="crc" Jan 27 14:11:42 crc kubenswrapper[4833]: W0127 14:11:42.427575 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:42 crc kubenswrapper[4833]: E0127 14:11:42.427669 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:43 crc kubenswrapper[4833]: W0127 14:11:43.042946 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:43 crc kubenswrapper[4833]: E0127 14:11:43.043080 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:43 crc kubenswrapper[4833]: 
I0127 14:11:43.147651 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.150730 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:26:53.000709256 +0000 UTC Jan 27 14:11:43 crc kubenswrapper[4833]: W0127 14:11:43.170486 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.128:6443: connect: connection refused Jan 27 14:11:43 crc kubenswrapper[4833]: E0127 14:11:43.170575 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.128:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.283238 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586" exitCode=0 Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.283344 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586"} Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.283399 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 
14:11:43.284293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.284337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.284349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.286962 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.286979 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.286975 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.287666 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.287753 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.286786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e"} Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288913 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.289031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.288935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.289007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.289142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:43 crc kubenswrapper[4833]: I0127 14:11:43.289044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.151657 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:51:49.142719295 +0000 UTC Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.292310 4833 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.294933 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e" exitCode=255 Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.295025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e"} Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.295048 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.296063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.296095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.296104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.296741 4833 scope.go:117] "RemoveContainer" containerID="4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.307339 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8"} Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.307395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79"} Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.307415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112"} Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.307428 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d"} Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.392303 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.392625 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.393863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.393909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.393921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:44 crc kubenswrapper[4833]: I0127 14:11:44.633707 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.058863 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.075836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.152238 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:58:40.884484874 +0000 UTC Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.312677 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.314470 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.314773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575"} Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.314962 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.315036 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.316118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.316163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.316173 4833 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.319511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74"} Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.319545 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.319593 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.320773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.320825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.320835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.321091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.321166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.321250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.614350 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.615738 4833 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.615846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.615960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.616048 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.846653 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.846869 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.848249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.848304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:45 crc kubenswrapper[4833]: I0127 14:11:45.848317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.153192 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:00:07.038823829 +0000 UTC Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.193734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.321957 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 
14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.322031 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.322150 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.322335 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.322943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.322994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:46 crc kubenswrapper[4833]: I0127 14:11:46.323749 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.154480 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:09:38.542629016 +0000 UTC Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.325050 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.325404 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.326823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.326881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.326903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.780911 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.781645 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.783745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.783876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.783940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 
14:11:47.934823 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.935108 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.936683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.936794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:47 crc kubenswrapper[4833]: I0127 14:11:47.936927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:48 crc kubenswrapper[4833]: I0127 14:11:48.155556 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:48:08.197993471 +0000 UTC Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 14:11:49.000173 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 14:11:49.000880 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 14:11:49.002331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 14:11:49.002411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 14:11:49.002429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:49 crc kubenswrapper[4833]: I0127 
14:11:49.156312 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:39:25.865592382 +0000 UTC Jan 27 14:11:49 crc kubenswrapper[4833]: E0127 14:11:49.307540 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.157623 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:13:14.479979391 +0000 UTC Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.935294 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.935425 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.977956 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.978996 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.983482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:50 crc kubenswrapper[4833]: 
I0127 14:11:50.983537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:50 crc kubenswrapper[4833]: I0127 14:11:50.983550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.159161 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:57:45.542822765 +0000 UTC Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.765783 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.766078 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.767494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.767534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.767548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:51 crc kubenswrapper[4833]: I0127 14:11:51.770076 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:11:52 crc kubenswrapper[4833]: I0127 14:11:52.159519 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:49:40.26321966 +0000 UTC Jan 27 14:11:52 crc kubenswrapper[4833]: I0127 14:11:52.336399 4833 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Jan 27 14:11:52 crc kubenswrapper[4833]: I0127 14:11:52.337622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:11:52 crc kubenswrapper[4833]: I0127 14:11:52.337676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:11:52 crc kubenswrapper[4833]: I0127 14:11:52.337689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:11:53 crc kubenswrapper[4833]: I0127 14:11:53.160239 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:41:29.44455153 +0000 UTC Jan 27 14:11:54 crc kubenswrapper[4833]: I0127 14:11:54.148929 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 14:11:54 crc kubenswrapper[4833]: I0127 14:11:54.161370 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:38:57.813636263 +0000 UTC Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.161698 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:39:54.441906692 +0000 UTC Jan 27 14:11:55 crc kubenswrapper[4833]: E0127 14:11:55.315932 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" 
logger="UnhandledError" Jan 27 14:11:55 crc kubenswrapper[4833]: E0127 14:11:55.356010 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 27 14:11:55 crc kubenswrapper[4833]: W0127 14:11:55.515179 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.515283 4833 trace.go:236] Trace[1037317379]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:11:45.514) (total time: 10000ms): Jan 27 14:11:55 crc kubenswrapper[4833]: Trace[1037317379]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (14:11:55.515) Jan 27 14:11:55 crc kubenswrapper[4833]: Trace[1037317379]: [10.000970052s] [10.000970052s] END Jan 27 14:11:55 crc kubenswrapper[4833]: E0127 14:11:55.515309 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 14:11:55 crc kubenswrapper[4833]: E0127 14:11:55.617131 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.950851 4833 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.950956 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.961719 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:11:55 crc kubenswrapper[4833]: I0127 14:11:55.961794 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:11:56 crc kubenswrapper[4833]: I0127 14:11:56.162264 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:21:22.932633028 +0000 UTC Jan 27 14:11:56 crc kubenswrapper[4833]: I0127 14:11:56.200604 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]log ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]etcd ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-filter ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-apiextensions-informers ok Jan 27 14:11:56 crc kubenswrapper[4833]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 27 14:11:56 crc kubenswrapper[4833]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-system-namespaces-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: 
[+]poststarthook/start-service-ip-repair-controllers ok Jan 27 14:11:56 crc kubenswrapper[4833]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 27 14:11:56 crc kubenswrapper[4833]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/bootstrap-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/start-kube-aggregator-informers ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-registration-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-discovery-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]autoregister-completion ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-openapi-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 27 14:11:56 crc kubenswrapper[4833]: livez check failed Jan 27 14:11:56 crc kubenswrapper[4833]: I0127 14:11:56.200695 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:11:57 crc kubenswrapper[4833]: I0127 14:11:57.162713 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:25:47.833411546 +0000 UTC Jan 27 14:11:58 crc kubenswrapper[4833]: I0127 14:11:58.163349 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:54:36.9875922 +0000 UTC Jan 27 14:11:59 crc kubenswrapper[4833]: I0127 14:11:59.164181 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:02:46.861058171 +0000 UTC Jan 27 14:11:59 crc kubenswrapper[4833]: E0127 14:11:59.307686 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.165097 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:39:03.481942807 +0000 UTC Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.935802 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.935883 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.946976 4833 trace.go:236] 
Trace[525217444]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:11:48.554) (total time: 12392ms): Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[525217444]: ---"Objects listed" error: 12392ms (14:12:00.946) Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[525217444]: [12.392747567s] [12.392747567s] END Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.947008 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.948640 4833 trace.go:236] Trace[1788310001]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:11:46.632) (total time: 14315ms): Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[1788310001]: ---"Objects listed" error: 14315ms (14:12:00.948) Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[1788310001]: [14.31580079s] [14.31580079s] END Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.948698 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.948700 4833 trace.go:236] Trace[924272377]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:11:47.355) (total time: 13592ms): Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[924272377]: ---"Objects listed" error: 13592ms (14:12:00.948) Jan 27 14:12:00 crc kubenswrapper[4833]: Trace[924272377]: [13.592944712s] [13.592944712s] END Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.948751 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.948799 4833 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 14:12:00 crc kubenswrapper[4833]: I0127 14:12:00.995622 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.039386 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.054816 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.139397 4833 apiserver.go:52] "Watching apiserver" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.142088 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.146909 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.147635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.147706 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.147717 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.147654 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.150014 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.150094 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.150118 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.150323 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.150597 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.152057 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.152125 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.152218 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155039 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155069 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155134 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155136 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155190 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.155175 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.165576 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-30 17:52:36.74206025 +0000 UTC Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.199765 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.216303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039
193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"message\\\":\\\"W0127 14:11:42.864364 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 14:11:42.864817 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769523102 cert, and key in /tmp/serving-cert-3023129166/serving-signer.crt, /tmp/serving-cert-3023129166/serving-signer.key\\\\nI0127 14:11:43.073897 1 observer_polling.go:159] Starting file observer\\\\nW0127 14:11:43.078019 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:11:43.078249 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:11:43.082633 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3023129166/tls.crt::/tmp/serving-cert-3023129166/tls.key\\\\\\\"\\\\nF0127 14:11:43.584533 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250154 4833 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250363 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 
14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250423 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250476 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250491 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250505 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250535 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250550 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:12:01 crc 
kubenswrapper[4833]: I0127 14:12:01.250609 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250632 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250648 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250663 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250677 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250695 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250749 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250778 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250798 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250829 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250848 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250865 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250880 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250914 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250930 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250946 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.250999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") 
pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251014 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251079 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251122 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251150 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251166 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251182 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251216 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251253 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251300 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 
14:12:01.251316 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251332 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251347 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251361 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251376 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251406 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251422 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251482 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251514 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251607 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251661 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251684 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251700 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251738 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251824 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251864 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251882 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251898 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251916 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251958 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.251995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252052 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252070 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252087 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252119 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252136 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252153 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252169 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252188 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252205 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252220 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252236 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252255 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252308 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252330 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252349 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252382 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252399 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:12:01 crc 
kubenswrapper[4833]: I0127 14:12:01.252466 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252483 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252518 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252534 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252551 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252567 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252601 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252620 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252690 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252706 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252781 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252821 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252862 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252879 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252913 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 14:12:01 crc 
kubenswrapper[4833]: I0127 14:12:01.252964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.252980 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253004 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253038 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253200 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253219 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253237 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " 
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253254 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253271 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253287 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253309 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.253326 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.265932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod 
"bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266133 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266224 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266252 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266387 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266426 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266434 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266570 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266804 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266890 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266934 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266963 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.266992 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267014 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267037 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267076 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267126 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267162 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267192 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267242 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267289 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.267311 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.268753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.271960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.271983 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.272246 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.272491 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.274901 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.275653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.275757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.275983 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.276071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.276527 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.277286 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.277532 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.277753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.278072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.278481 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.278694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.279019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.279231 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.279516 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.285219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.286753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.287122 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.287357 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.287700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.287953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.288117 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.288261 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.288506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.289378 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.291004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.291438 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.291789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.296497 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.309858 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.310209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.310477 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.310671 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.310923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.310991 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.311221 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.311228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.311375 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:12:01.811351023 +0000 UTC m=+23.462675425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.319509 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.340275 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.311429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.311631 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.311799 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.312232 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.314112 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.314317 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.314659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.315010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.315309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.315421 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.315701 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.315688 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316220 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316252 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316306 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316507 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.316861 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317117 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317204 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317624 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317846 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.317915 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318019 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318630 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.318808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.319052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.319974 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.319968 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.320934 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321785 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321853 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.321856 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.323592 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.324003 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.324367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.323837 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.327465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.327482 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.327764 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.328107 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.328253 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.328769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.328832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.329013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.329133 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.329373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.338195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.338207 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.338630 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.339017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.329584 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.339106 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.339308 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.339898 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.339086 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.340918 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341220 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341504 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341544 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341604 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341621 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341639 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341653 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341685 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341720 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341739 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341759 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341793 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341812 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341832 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341870 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341888 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" 
(UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341936 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341953 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341970 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.341990 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342007 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342073 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342126 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342230 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342285 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 
14:12:01.342323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342372 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342501 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342513 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342524 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342533 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342542 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342550 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342559 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342567 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342577 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342587 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342595 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342607 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342617 4833 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node 
\"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342626 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342636 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342646 4833 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342655 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342664 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342673 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342682 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 
27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342691 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342700 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342709 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342717 4833 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342726 4833 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342734 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342742 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342751 4833 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342760 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342769 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342777 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342786 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342794 4833 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342802 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342810 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc 
kubenswrapper[4833]: I0127 14:12:01.342819 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342827 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342836 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342844 4833 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342853 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342861 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342869 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342878 4833 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342888 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342897 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342906 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342915 4833 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342926 4833 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342935 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342943 4833 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342954 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342962 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342971 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342980 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342988 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.342997 4833 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343005 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343014 4833 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343023 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343031 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343042 4833 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343051 4833 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343059 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343068 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 
14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343076 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343084 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343093 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343101 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343110 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343118 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343126 4833 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343135 4833 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343143 4833 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343152 4833 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343161 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343174 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343183 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343191 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343200 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343212 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343220 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343230 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343559 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343782 4833 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343870 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343913 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.343933 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344003 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344123 4833 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344182 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344195 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344208 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344243 4833 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344274 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344306 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344360 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344374 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344385 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.344416 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347068 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347314 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347463 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347607 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347690 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.347760 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.348073 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.348466 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.348932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350547 4833 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350618 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350651 4833 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350737 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350818 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 
14:12:01.350822 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350855 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350914 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350975 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.350992 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351025 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351058 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351085 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351112 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351155 4833 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351165 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351176 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351175 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351189 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351205 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351217 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351291 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351403 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.351418 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.354113 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" 
(UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.354177 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.354342 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.354598 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.355050 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.355266 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.355430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.355741 4833 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.356130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.356405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357341 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357640 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357869 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357844 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.357895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.357975 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.358011 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.358059 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:01.858041052 +0000 UTC m=+23.509365444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.358428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.358855 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.358956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.359096 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.359369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.359844 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.359906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.360018 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.360262 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.360415 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:01.860388818 +0000 UTC m=+23.511713400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.360884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361169 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361176 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361256 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361900 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.361925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.362143 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.362568 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.362589 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.363221 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.363391 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.366265 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.366400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.366647 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.368068 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" exitCode=255 Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.368397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575"} Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.368598 4833 scope.go:117] "RemoveContainer" containerID="4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.369252 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.369483 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.383039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.389920 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.399584 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.399679 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.401556 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.401870 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.402069 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.402134 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.403004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.403640 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.403959 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.410841 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.410888 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.410907 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.410991 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:01.910967622 +0000 UTC m=+23.562292184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.411076 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.415868 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.416327 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.416357 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.416375 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.416475 4833 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:01.916429534 +0000 UTC m=+23.567753946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.416648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.417597 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.419700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.414434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.427188 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.427228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.427621 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.432622 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.436539 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.438591 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.438999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.439803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.441601 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.443161 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.443578 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.447407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.447891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.449604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.450213 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451798 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451879 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451890 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451900 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451909 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451918 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451929 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451938 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451947 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451957 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451966 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451975 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451984 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.451993 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452003 4833 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452012 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452022 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452031 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 
14:12:01.452040 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452050 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452058 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452067 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452075 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452083 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452095 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452116 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452128 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452137 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452146 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452155 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452168 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452180 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452190 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 
14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452201 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452212 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452223 4833 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452234 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452246 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452256 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452267 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 
27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452279 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452288 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452299 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452309 4833 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452319 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452330 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452342 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452352 4833 reconciler_common.go:293] "Volume detached 
for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452361 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452370 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452380 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452389 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452400 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452411 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452423 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452435 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452465 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452475 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452487 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452498 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452510 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452522 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 
27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452533 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452543 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452553 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452564 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452573 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452581 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452589 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452597 4833 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452606 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452615 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452624 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452633 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.452748 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.464800 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.469757 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.475342 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.486589 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.502912 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:01 crc kubenswrapper[4833]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 14:12:01 crc kubenswrapper[4833]: if [[ -f "/env/_master" ]]; then Jan 27 14:12:01 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:01 crc kubenswrapper[4833]: source "/env/_master" Jan 27 14:12:01 crc kubenswrapper[4833]: set +o allexport Jan 27 14:12:01 crc kubenswrapper[4833]: fi Jan 27 14:12:01 crc kubenswrapper[4833]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Jan 27 14:12:01 crc kubenswrapper[4833]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 27 14:12:01 crc kubenswrapper[4833]: ho_enable="--enable-hybrid-overlay" Jan 27 14:12:01 crc kubenswrapper[4833]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 27 14:12:01 crc kubenswrapper[4833]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 27 14:12:01 crc kubenswrapper[4833]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 27 14:12:01 crc kubenswrapper[4833]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 14:12:01 crc kubenswrapper[4833]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 27 14:12:01 crc kubenswrapper[4833]: --webhook-host=127.0.0.1 \ Jan 27 14:12:01 crc kubenswrapper[4833]: --webhook-port=9743 \ Jan 27 14:12:01 crc kubenswrapper[4833]: ${ho_enable} \ Jan 27 14:12:01 crc kubenswrapper[4833]: --enable-interconnect \ Jan 27 14:12:01 crc kubenswrapper[4833]: --disable-approver \ Jan 27 
14:12:01 crc kubenswrapper[4833]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 27 14:12:01 crc kubenswrapper[4833]: --wait-for-kubernetes-api=200s \ Jan 27 14:12:01 crc kubenswrapper[4833]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 27 14:12:01 crc kubenswrapper[4833]: --loglevel="${LOGLEVEL}" Jan 27 14:12:01 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:01 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.503994 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.512163 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:01 crc kubenswrapper[4833]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 14:12:01 crc kubenswrapper[4833]: if [[ -f "/env/_master" ]]; then Jan 27 14:12:01 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:01 crc kubenswrapper[4833]: source "/env/_master" Jan 27 14:12:01 crc kubenswrapper[4833]: set +o allexport Jan 27 14:12:01 crc kubenswrapper[4833]: fi Jan 27 14:12:01 crc kubenswrapper[4833]: Jan 27 14:12:01 crc kubenswrapper[4833]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 27 14:12:01 crc kubenswrapper[4833]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 14:12:01 crc kubenswrapper[4833]: --disable-webhook \ Jan 27 14:12:01 crc kubenswrapper[4833]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 27 14:12:01 crc kubenswrapper[4833]: --loglevel="${LOGLEVEL}" Jan 27 14:12:01 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:01 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.513971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.514583 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:01 crc kubenswrapper[4833]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 27 14:12:01 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:01 crc kubenswrapper[4833]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 27 14:12:01 crc kubenswrapper[4833]: source /etc/kubernetes/apiserver-url.env Jan 27 14:12:01 crc kubenswrapper[4833]: else Jan 27 14:12:01 crc kubenswrapper[4833]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 27 14:12:01 crc kubenswrapper[4833]: exit 1 Jan 27 14:12:01 crc kubenswrapper[4833]: fi Jan 27 14:12:01 crc kubenswrapper[4833]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 27 14:12:01 crc kubenswrapper[4833]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:01 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.515700 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.515730 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.517679 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.527252 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.556088 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.579581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.601495 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"message\\\":\\\"W0127 14:11:42.864364 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 14:11:42.864817 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769523102 cert, and key in /tmp/serving-cert-3023129166/serving-signer.crt, /tmp/serving-cert-3023129166/serving-signer.key\\\\nI0127 14:11:43.073897 1 observer_polling.go:159] Starting file observer\\\\nW0127 14:11:43.078019 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:11:43.078249 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:11:43.082633 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3023129166/tls.crt::/tmp/serving-cert-3023129166/tls.key\\\\\\\"\\\\nF0127 14:11:43.584533 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.620952 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.632960 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.648165 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.662531 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.856317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.856520 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:12:02.856496541 +0000 UTC m=+24.507820943 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.957428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.957534 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.957555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:01 crc kubenswrapper[4833]: I0127 14:12:01.957581 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957650 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957686 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957702 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957726 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957744 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957756 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957747 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957766 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:02.957748419 +0000 UTC m=+24.609072821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957863 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957869 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:02.957847941 +0000 UTC m=+24.609172343 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957937 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:02.957914803 +0000 UTC m=+24.609239395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:01 crc kubenswrapper[4833]: E0127 14:12:01.957954 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:02.957944254 +0000 UTC m=+24.609268866 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.023470 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.025259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.025308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.025321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.025404 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.033938 4833 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.034288 4833 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.040697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.040727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.040739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc 
kubenswrapper[4833]: I0127 14:12:02.040759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.040773 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.054845 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.059861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.059900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.059909 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.059926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.059936 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.074085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.074112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.074121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.074136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.074147 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.088696 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.093553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.093607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.093618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.093640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.093653 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.108725 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.113351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.113410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.113426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.113471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.113487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.130850 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.131045 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.132906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.132953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.132969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.132990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.133007 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.166706 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:01:07.703485492 +0000 UTC Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.235793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.235831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.235840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.235856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.235867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.343303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.343355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.343369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.343399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.343412 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.371411 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b000ea60c84b87a9c36276d6d1369cbb333cf7ba53b3ec3c09451cfc662d0d12"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.372171 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b59d593aaf05729db15db7fc1fa33539f7be276bd04843d230a35ac55a3a894f"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.372962 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.373365 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:02 crc kubenswrapper[4833]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 27 14:12:02 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:02 crc kubenswrapper[4833]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 27 14:12:02 crc kubenswrapper[4833]: source /etc/kubernetes/apiserver-url.env Jan 27 14:12:02 crc 
kubenswrapper[4833]: else Jan 27 14:12:02 crc kubenswrapper[4833]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 27 14:12:02 crc kubenswrapper[4833]: exit 1 Jan 27 14:12:02 crc kubenswrapper[4833]: fi Jan 27 14:12:02 crc kubenswrapper[4833]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 27 14:12:02 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f
5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:02 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.373690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6a28f55dcccd5e2e0dfb85b41436a79eea88bf9cafdb3cf1167ef08bec09c559"} Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.374056 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.374431 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.374831 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:02 crc kubenswrapper[4833]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 14:12:02 crc kubenswrapper[4833]: if [[ -f "/env/_master" ]]; then Jan 27 14:12:02 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:02 crc kubenswrapper[4833]: source "/env/_master" Jan 27 14:12:02 crc kubenswrapper[4833]: set +o allexport Jan 27 14:12:02 crc kubenswrapper[4833]: fi Jan 27 14:12:02 crc kubenswrapper[4833]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 27 14:12:02 crc kubenswrapper[4833]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 27 14:12:02 crc kubenswrapper[4833]: ho_enable="--enable-hybrid-overlay" Jan 27 14:12:02 crc kubenswrapper[4833]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 27 14:12:02 crc kubenswrapper[4833]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 27 14:12:02 crc kubenswrapper[4833]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 27 14:12:02 crc kubenswrapper[4833]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 14:12:02 crc kubenswrapper[4833]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 27 14:12:02 crc kubenswrapper[4833]: --webhook-host=127.0.0.1 \ Jan 27 14:12:02 crc kubenswrapper[4833]: --webhook-port=9743 \ Jan 27 14:12:02 crc kubenswrapper[4833]: ${ho_enable} \ Jan 27 14:12:02 crc kubenswrapper[4833]: --enable-interconnect \ Jan 27 14:12:02 crc kubenswrapper[4833]: --disable-approver \ Jan 27 14:12:02 crc kubenswrapper[4833]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 27 14:12:02 crc kubenswrapper[4833]: --wait-for-kubernetes-api=200s \ Jan 27 14:12:02 crc kubenswrapper[4833]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 27 14:12:02 crc kubenswrapper[4833]: --loglevel="${LOGLEVEL}" Jan 27 14:12:02 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:02 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.376339 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.376811 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 
14:12:02 crc kubenswrapper[4833]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 27 14:12:02 crc kubenswrapper[4833]: if [[ -f "/env/_master" ]]; then Jan 27 14:12:02 crc kubenswrapper[4833]: set -o allexport Jan 27 14:12:02 crc kubenswrapper[4833]: source "/env/_master" Jan 27 14:12:02 crc kubenswrapper[4833]: set +o allexport Jan 27 14:12:02 crc kubenswrapper[4833]: fi Jan 27 14:12:02 crc kubenswrapper[4833]: Jan 27 14:12:02 crc kubenswrapper[4833]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 27 14:12:02 crc kubenswrapper[4833]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 27 14:12:02 crc kubenswrapper[4833]: --disable-webhook \ Jan 27 14:12:02 crc kubenswrapper[4833]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 27 14:12:02 crc kubenswrapper[4833]: --loglevel="${LOGLEVEL}" Jan 27 14:12:02 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:02 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.379651 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.379794 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.381044 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.398359 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.439042 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.446376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.446805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.446915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.447016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.447110 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.460077 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.471404 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.483699 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d95836dc7a3d72d86d34bb0e946598fd7c6509462a165f32b0c3e33f82db50e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"message\\\":\\\"W0127 14:11:42.864364 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 14:11:42.864817 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769523102 cert, and key in /tmp/serving-cert-3023129166/serving-signer.crt, 
/tmp/serving-cert-3023129166/serving-signer.key\\\\nI0127 14:11:43.073897 1 observer_polling.go:159] Starting file observer\\\\nW0127 14:11:43.078019 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:11:43.078249 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:11:43.082633 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3023129166/tls.crt::/tmp/serving-cert-3023129166/tls.key\\\\\\\"\\\\nF0127 14:11:43.584533 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.494339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.505246 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.516326 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.528914 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.540321 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.550203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.550270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.550289 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.550311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.550327 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.561340 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f
50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.575165 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.588890 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.602118 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.616703 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.637063 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.652820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.652871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.652880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.652904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.652916 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.755399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.755469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.755481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.755497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.755507 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.805108 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xjvwp"] Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.805611 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.806095 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-npb46"] Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.806610 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.807822 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.808555 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.808929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.809723 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-k84ff"] Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.810514 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.810978 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.811863 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.812112 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.812497 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.812531 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.812648 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.812500 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.813197 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mcx7z"] Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.818576 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.822377 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.822742 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.822870 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.823009 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.824217 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.833476 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.843382 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.855144 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.857758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.857815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.857832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.857855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.857869 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.864791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.865040 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:12:04.865012953 +0000 UTC m=+26.516337355 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.867364 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.880423 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.893632 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.905655 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.917468 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.926081 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.938997 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.951710 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.959860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.959901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.959911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.959927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.959938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:02Z","lastTransitionTime":"2026-01-27T14:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.964363 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965599 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-bin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965650 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-multus\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-rootfs\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-netns\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965750 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-binary-copy\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965833 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-proxy-tls\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-os-release\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965895 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-cnibin\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s95v\" (UniqueName: \"kubernetes.io/projected/f348cf7e-0a0b-400a-af50-1e342385c42d-kube-api-access-5s95v\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.965985 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-conf-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966054 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966059 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966115 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-mcd-auth-proxy-config\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966079 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-socket-dir-parent\") pod \"multus-npb46\" (UID: 
\"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966155 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966054 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2d4d\" (UniqueName: \"kubernetes.io/projected/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-kube-api-access-l2d4d\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966191 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966198 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966239 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:04.96622697 +0000 UTC m=+26.617551372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966132 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966299 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:04.966285761 +0000 UTC m=+26.617610363 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.966354 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:04.966312172 +0000 UTC m=+26.617636754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-k8s-cni-cncf-io\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966509 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-multus-certs\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966534 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvv8l\" (UniqueName: \"kubernetes.io/projected/fa8c488b-eed2-4666-a5c3-6aa129655eee-kube-api-access-gvv8l\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966560 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-cnibin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc 
kubenswrapper[4833]: I0127 14:12:02.966591 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-daemon-config\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966665 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-system-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-cni-binary-copy\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-hostroot\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-etc-kubernetes\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966797 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-os-release\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxr6\" (UniqueName: \"kubernetes.io/projected/b7a7c135-ca95-4e75-b823-d1e45101a761-kube-api-access-ktxr6\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa8c488b-eed2-4666-a5c3-6aa129655eee-hosts-file\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966901 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966955 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-system-cni-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.966986 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-kubelet\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.967021 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: E0127 14:12:02.967085 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:04.967075941 +0000 UTC m=+26.618400343 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.977161 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:02 crc kubenswrapper[4833]: I0127 14:12:02.988641 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.005881 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.015887 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.027013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.036021 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.048569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.061397 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.062587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.062635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.062648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.062667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.062679 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-multus-certs\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067753 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-cnibin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-daemon-config\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvv8l\" (UniqueName: \"kubernetes.io/projected/fa8c488b-eed2-4666-a5c3-6aa129655eee-kube-api-access-gvv8l\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-os-release\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067844 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-system-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-cni-binary-copy\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067888 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-hostroot\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-etc-kubernetes\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067951 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktxr6\" (UniqueName: 
\"kubernetes.io/projected/b7a7c135-ca95-4e75-b823-d1e45101a761-kube-api-access-ktxr6\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067979 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-multus-certs\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068024 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-cnibin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068037 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-hostroot\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.067987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc 
kubenswrapper[4833]: I0127 14:12:03.068110 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa8c488b-eed2-4666-a5c3-6aa129655eee-hosts-file\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068132 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-kubelet\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-system-cni-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-bin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-multus\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068238 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-rootfs\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068264 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-netns\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068286 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-binary-copy\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-proxy-tls\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-os-release\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-cnibin\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s95v\" (UniqueName: \"kubernetes.io/projected/f348cf7e-0a0b-400a-af50-1e342385c42d-kube-api-access-5s95v\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-conf-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068471 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-mcd-auth-proxy-config\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068560 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-os-release\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068561 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-etc-kubernetes\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-system-cni-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-bin\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068825 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fa8c488b-eed2-4666-a5c3-6aa129655eee-hosts-file\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-socket-dir-parent\") pod \"multus-npb46\" 
(UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068857 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-kubelet\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068855 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-cni-binary-copy\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-system-cni-dir\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2d4d\" (UniqueName: \"kubernetes.io/projected/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-kube-api-access-l2d4d\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-k8s-cni-cncf-io\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068939 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-rootfs\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.068939 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-var-lib-cni-multus\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069048 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-daemon-config\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-netns\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069146 
4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-host-run-k8s-cni-cncf-io\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-socket-dir-parent\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-os-release\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069218 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f348cf7e-0a0b-400a-af50-1e342385c42d-cnibin\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069243 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7a7c135-ca95-4e75-b823-d1e45101a761-multus-conf-dir\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-binary-copy\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.069841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f348cf7e-0a0b-400a-af50-1e342385c42d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.070026 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-mcd-auth-proxy-config\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.072023 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.072104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-proxy-tls\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 
14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.086240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktxr6\" (UniqueName: \"kubernetes.io/projected/b7a7c135-ca95-4e75-b823-d1e45101a761-kube-api-access-ktxr6\") pod \"multus-npb46\" (UID: \"b7a7c135-ca95-4e75-b823-d1e45101a761\") " pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.086408 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2d4d\" (UniqueName: \"kubernetes.io/projected/cd82cea5-8cab-4c03-b640-2b4d45ba7e53-kube-api-access-l2d4d\") pod \"machine-config-daemon-mcx7z\" (UID: \"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\") " pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.086662 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvv8l\" (UniqueName: \"kubernetes.io/projected/fa8c488b-eed2-4666-a5c3-6aa129655eee-kube-api-access-gvv8l\") pod \"node-resolver-xjvwp\" (UID: \"fa8c488b-eed2-4666-a5c3-6aa129655eee\") " pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.088465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s95v\" (UniqueName: \"kubernetes.io/projected/f348cf7e-0a0b-400a-af50-1e342385c42d-kube-api-access-5s95v\") pod \"multus-additional-cni-plugins-k84ff\" (UID: \"f348cf7e-0a0b-400a-af50-1e342385c42d\") " pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.122277 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xjvwp" Jan 27 14:12:03 crc kubenswrapper[4833]: W0127 14:12:03.133110 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8c488b_eed2_4666_a5c3_6aa129655eee.slice/crio-376b49a481cdc180d4d62d69a8d4a45b530266ea36dbc392de6d4f4a967c2189 WatchSource:0}: Error finding container 376b49a481cdc180d4d62d69a8d4a45b530266ea36dbc392de6d4f4a967c2189: Status 404 returned error can't find the container with id 376b49a481cdc180d4d62d69a8d4a45b530266ea36dbc392de6d4f4a967c2189 Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.133214 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-npb46" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.138501 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:03 crc kubenswrapper[4833]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 27 14:12:03 crc kubenswrapper[4833]: set -uo pipefail Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 27 14:12:03 crc kubenswrapper[4833]: HOSTS_FILE="/etc/hosts" Jan 27 14:12:03 crc kubenswrapper[4833]: TEMP_FILE="/etc/hosts.tmp" Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Make a temporary file with the old hosts file's attributes. Jan 27 14:12:03 crc kubenswrapper[4833]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 27 14:12:03 crc kubenswrapper[4833]: echo "Failed to preserve hosts file. Exiting." Jan 27 14:12:03 crc kubenswrapper[4833]: exit 1 Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: while true; do Jan 27 14:12:03 crc kubenswrapper[4833]: declare -A svc_ips Jan 27 14:12:03 crc kubenswrapper[4833]: for svc in "${services[@]}"; do Jan 27 14:12:03 crc kubenswrapper[4833]: # Fetch service IP from cluster dns if present. We make several tries Jan 27 14:12:03 crc kubenswrapper[4833]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 27 14:12:03 crc kubenswrapper[4833]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 27 14:12:03 crc kubenswrapper[4833]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 27 14:12:03 crc kubenswrapper[4833]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 27 14:12:03 crc kubenswrapper[4833]: for i in ${!cmds[*]} Jan 27 14:12:03 crc kubenswrapper[4833]: do Jan 27 14:12:03 crc kubenswrapper[4833]: ips=($(eval "${cmds[i]}")) Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: svc_ips["${svc}"]="${ips[@]}" Jan 27 14:12:03 crc kubenswrapper[4833]: break Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Update /etc/hosts only if we get valid service IPs Jan 27 14:12:03 crc kubenswrapper[4833]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 27 14:12:03 crc kubenswrapper[4833]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 27 14:12:03 crc kubenswrapper[4833]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 27 14:12:03 crc kubenswrapper[4833]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: continue Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Append resolver entries for services Jan 27 14:12:03 crc kubenswrapper[4833]: rc=0 Jan 27 14:12:03 crc kubenswrapper[4833]: for svc in "${!svc_ips[@]}"; do Jan 27 14:12:03 crc kubenswrapper[4833]: for ip in ${svc_ips[${svc}]}; do Jan 27 14:12:03 crc kubenswrapper[4833]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ $rc -ne 0 ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: continue Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 27 14:12:03 crc kubenswrapper[4833]: # Replace /etc/hosts with our modified version if needed Jan 27 14:12:03 crc kubenswrapper[4833]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 27 14:12:03 crc kubenswrapper[4833]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: unset svc_ips Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvv8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xjvwp_openshift-dns(fa8c488b-eed2-4666-a5c3-6aa129655eee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:03 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.139728 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xjvwp" podUID="fa8c488b-eed2-4666-a5c3-6aa129655eee" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.141994 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-k84ff" Jan 27 14:12:03 crc kubenswrapper[4833]: W0127 14:12:03.145390 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7a7c135_ca95_4e75_b823_d1e45101a761.slice/crio-d5d557f6fae3dcc017be021f0484c12f182122883c90c04ea36bde80847ea7ae WatchSource:0}: Error finding container d5d557f6fae3dcc017be021f0484c12f182122883c90c04ea36bde80847ea7ae: Status 404 returned error can't find the container with id d5d557f6fae3dcc017be021f0484c12f182122883c90c04ea36bde80847ea7ae Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.148776 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:03 crc kubenswrapper[4833]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 27 14:12:03 crc kubenswrapper[4833]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 27 14:12:03 crc kubenswrapper[4833]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-npb46_openshift-multus(b7a7c135-ca95-4e75-b823-d1e45101a761): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:03 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.149293 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.150252 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-npb46" podUID="b7a7c135-ca95-4e75-b823-d1e45101a761" Jan 27 14:12:03 crc kubenswrapper[4833]: W0127 14:12:03.157554 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf348cf7e_0a0b_400a_af50_1e342385c42d.slice/crio-4257fba73e8870ec74371fa5a66413d37eb1cb77ddc2a71c985047a18a416db7 WatchSource:0}: Error finding container 4257fba73e8870ec74371fa5a66413d37eb1cb77ddc2a71c985047a18a416db7: Status 404 returned error can't find the container with id 4257fba73e8870ec74371fa5a66413d37eb1cb77ddc2a71c985047a18a416db7 Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.159881 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s95v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-k84ff_openshift-multus(f348cf7e-0a0b-400a-af50-1e342385c42d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.161104 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-k84ff" podUID="f348cf7e-0a0b-400a-af50-1e342385c42d" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.165399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.165661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.165709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.165734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.165747 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: W0127 14:12:03.167491 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd82cea5_8cab_4c03_b640_2b4d45ba7e53.slice/crio-62b641f59681ffb95ee59ed87273e429a5f9cb556af51bb4f693990a081ee2af WatchSource:0}: Error finding container 62b641f59681ffb95ee59ed87273e429a5f9cb556af51bb4f693990a081ee2af: Status 404 returned error can't find the container with id 62b641f59681ffb95ee59ed87273e429a5f9cb556af51bb4f693990a081ee2af Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.167485 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:24:00.318273684 +0000 UTC Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.168925 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2d4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.169657 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jpt5h"] Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.170665 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173206 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173206 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173319 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173378 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173209 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173505 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.173632 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.185333 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2d4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.186558 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.187711 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.207276 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.209934 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.210051 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.210308 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.210348 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.210501 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.210622 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.214418 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.215080 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.216433 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.217200 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.218214 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.218474 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.218884 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.219539 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.220527 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.221232 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 14:12:03 crc 
kubenswrapper[4833]: I0127 14:12:03.222258 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.222842 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.223985 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.224582 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.225192 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.226255 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.226916 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.227964 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 14:12:03 crc 
kubenswrapper[4833]: I0127 14:12:03.228433 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.229120 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.230127 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.230370 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.230944 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.232033 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.232601 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.233795 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.234267 4833 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.234957 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.236047 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.236647 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.237764 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.238322 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.239413 4833 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.239628 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 
14:12:03.241413 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.242436 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.242983 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.245054 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.246036 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.247151 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.247973 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.247924 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.249657 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.250206 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.251285 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.251960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.253000 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.253545 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.254579 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.255176 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.256310 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.256921 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.258227 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.258874 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.260276 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.260499 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.261078 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.261579 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.267900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.267954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.267967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.267988 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.267999 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270336 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270427 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270491 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270516 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270537 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270604 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270625 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270639 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270669 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6fpdn\" (UniqueName: \"kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.270687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.271073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.280372 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.292638 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.304945 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.316104 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.326774 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.327590 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.337603 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.338548 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.354138 4833 csr.go:261] certificate signing request csr-qdw7v is approved, waiting to be issued Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.369043 4833 csr.go:257] certificate signing request csr-qdw7v is issued Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371185 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371220 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc 
kubenswrapper[4833]: I0127 14:12:03.371320 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371341 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpdn\" (UniqueName: \"kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371392 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371460 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371590 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371639 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.371231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372106 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372432 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372509 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372538 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372744 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372837 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372872 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372903 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372906 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372932 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372943 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.372978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn\") pod \"ovnkube-node-jpt5h\" (UID: 
\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.373172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.377905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.381274 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"62b641f59681ffb95ee59ed87273e429a5f9cb556af51bb4f693990a081ee2af"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.382048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerStarted","Data":"4257fba73e8870ec74371fa5a66413d37eb1cb77ddc2a71c985047a18a416db7"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.383458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerStarted","Data":"d5d557f6fae3dcc017be021f0484c12f182122883c90c04ea36bde80847ea7ae"} Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.383867 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2d4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.383892 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s95v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod multus-additional-cni-plugins-k84ff_openshift-multus(f348cf7e-0a0b-400a-af50-1e342385c42d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.384552 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:03 crc kubenswrapper[4833]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 27 14:12:03 crc kubenswrapper[4833]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 27 14:12:03 crc kubenswrapper[4833]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-npb46_openshift-multus(b7a7c135-ca95-4e75-b823-d1e45101a761): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:03 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.384667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xjvwp" event={"ID":"fa8c488b-eed2-4666-a5c3-6aa129655eee","Type":"ContainerStarted","Data":"376b49a481cdc180d4d62d69a8d4a45b530266ea36dbc392de6d4f4a967c2189"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.384869 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:03 crc 
kubenswrapper[4833]: E0127 14:12:03.384990 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.385170 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-k84ff" podUID="f348cf7e-0a0b-400a-af50-1e342385c42d" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.385635 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:03 crc kubenswrapper[4833]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 27 14:12:03 crc kubenswrapper[4833]: set -uo pipefail Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 27 14:12:03 crc kubenswrapper[4833]: HOSTS_FILE="/etc/hosts" Jan 27 14:12:03 crc kubenswrapper[4833]: TEMP_FILE="/etc/hosts.tmp" Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Make a temporary file with the old hosts file's attributes. Jan 27 14:12:03 crc kubenswrapper[4833]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 27 14:12:03 crc kubenswrapper[4833]: echo "Failed to preserve hosts file. Exiting." Jan 27 14:12:03 crc kubenswrapper[4833]: exit 1 Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: while true; do Jan 27 14:12:03 crc kubenswrapper[4833]: declare -A svc_ips Jan 27 14:12:03 crc kubenswrapper[4833]: for svc in "${services[@]}"; do Jan 27 14:12:03 crc kubenswrapper[4833]: # Fetch service IP from cluster dns if present. We make several tries Jan 27 14:12:03 crc kubenswrapper[4833]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 27 14:12:03 crc kubenswrapper[4833]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 27 14:12:03 crc kubenswrapper[4833]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 27 14:12:03 crc kubenswrapper[4833]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 27 14:12:03 crc kubenswrapper[4833]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 27 14:12:03 crc kubenswrapper[4833]: for i in ${!cmds[*]} Jan 27 14:12:03 crc kubenswrapper[4833]: do Jan 27 14:12:03 crc kubenswrapper[4833]: ips=($(eval "${cmds[i]}")) Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: svc_ips["${svc}"]="${ips[@]}" Jan 27 14:12:03 crc kubenswrapper[4833]: break Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Update /etc/hosts only if we get valid service IPs Jan 27 14:12:03 crc kubenswrapper[4833]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 27 14:12:03 crc kubenswrapper[4833]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 27 14:12:03 crc kubenswrapper[4833]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 27 14:12:03 crc kubenswrapper[4833]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: continue Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # Append resolver entries for services Jan 27 14:12:03 crc kubenswrapper[4833]: rc=0 Jan 27 14:12:03 crc kubenswrapper[4833]: for svc in "${!svc_ips[@]}"; do Jan 27 14:12:03 crc kubenswrapper[4833]: for ip in ${svc_ips[${svc}]}; do Jan 27 14:12:03 crc kubenswrapper[4833]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: if [[ $rc -ne 0 ]]; then Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: continue Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: Jan 27 14:12:03 crc kubenswrapper[4833]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 27 14:12:03 crc kubenswrapper[4833]: # Replace /etc/hosts with our modified version if needed Jan 27 14:12:03 crc kubenswrapper[4833]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 27 14:12:03 crc kubenswrapper[4833]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 27 14:12:03 crc kubenswrapper[4833]: fi Jan 27 14:12:03 crc kubenswrapper[4833]: sleep 60 & wait Jan 27 14:12:03 crc kubenswrapper[4833]: unset svc_ips Jan 27 14:12:03 crc kubenswrapper[4833]: done Jan 27 14:12:03 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvv8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-xjvwp_openshift-dns(fa8c488b-eed2-4666-a5c3-6aa129655eee): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:03 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.385734 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 
--tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2d4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.385787 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-npb46" podUID="b7a7c135-ca95-4e75-b823-d1e45101a761" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 
14:12:03.387028 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.387058 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-xjvwp" podUID="fa8c488b-eed2-4666-a5c3-6aa129655eee" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.388321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpdn\" (UniqueName: \"kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn\") pod \"ovnkube-node-jpt5h\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.400176 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.410638 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.419741 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.427590 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.439884 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.452460 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.461907 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.472542 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.474313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.474481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.474681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.474777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.474901 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.481658 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.490730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.497436 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.506878 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.513519 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:03 crc kubenswrapper[4833]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 27 14:12:03 crc kubenswrapper[4833]: apiVersion: v1 Jan 27 14:12:03 crc kubenswrapper[4833]: clusters: Jan 27 14:12:03 crc kubenswrapper[4833]: - cluster: Jan 27 14:12:03 crc kubenswrapper[4833]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 27 14:12:03 crc kubenswrapper[4833]: server: https://api-int.crc.testing:6443 Jan 27 14:12:03 crc kubenswrapper[4833]: name: default-cluster Jan 27 14:12:03 crc kubenswrapper[4833]: contexts: Jan 27 14:12:03 crc kubenswrapper[4833]: - context: Jan 27 14:12:03 crc kubenswrapper[4833]: cluster: default-cluster Jan 27 14:12:03 crc kubenswrapper[4833]: namespace: default Jan 27 14:12:03 crc kubenswrapper[4833]: user: default-auth Jan 27 
14:12:03 crc kubenswrapper[4833]: name: default-context Jan 27 14:12:03 crc kubenswrapper[4833]: current-context: default-context Jan 27 14:12:03 crc kubenswrapper[4833]: kind: Config Jan 27 14:12:03 crc kubenswrapper[4833]: preferences: {} Jan 27 14:12:03 crc kubenswrapper[4833]: users: Jan 27 14:12:03 crc kubenswrapper[4833]: - name: default-auth Jan 27 14:12:03 crc kubenswrapper[4833]: user: Jan 27 14:12:03 crc kubenswrapper[4833]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 27 14:12:03 crc kubenswrapper[4833]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 27 14:12:03 crc kubenswrapper[4833]: EOF Jan 27 14:12:03 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fpdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:03 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:03 crc kubenswrapper[4833]: E0127 14:12:03.514669 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.517652 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.530090 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.540070 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.551680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.561129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.577142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.577184 4833 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.577196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.577214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.577229 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.600518 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.641155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.679518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.679569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.679584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.679609 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.679623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.686657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\
" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.738555 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f
9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.762596 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.782156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.782434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.782530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.782604 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.782672 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.799943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.841761 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.880985 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.885742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.885790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.885803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.885824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.885838 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.919468 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.958439 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.988802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.988841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.988850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.988866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:03 crc kubenswrapper[4833]: I0127 14:12:03.988876 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:03Z","lastTransitionTime":"2026-01-27T14:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.091428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.091493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.091505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.091528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.091540 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.167901 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:41:32.484631889 +0000 UTC Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.193979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.194020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.194029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.194044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.194054 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.296565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.296602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.296611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.296625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.296635 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.370075 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 14:07:03 +0000 UTC, rotation deadline is 2026-11-02 08:22:41.031904201 +0000 UTC Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.370128 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6690h10m36.661779312s for next certificate rotation Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.387254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"49293bd2774c9d26b84bb88b4ee3d3c1fe4159153f9ca18252bf677735f098fe"} Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.389056 4833 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 14:12:04 crc kubenswrapper[4833]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 27 14:12:04 crc kubenswrapper[4833]: apiVersion: v1 Jan 27 14:12:04 crc kubenswrapper[4833]: clusters: Jan 27 14:12:04 crc kubenswrapper[4833]: - cluster: Jan 27 14:12:04 crc kubenswrapper[4833]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 27 14:12:04 crc kubenswrapper[4833]: server: https://api-int.crc.testing:6443 Jan 27 14:12:04 crc kubenswrapper[4833]: name: default-cluster Jan 27 14:12:04 crc kubenswrapper[4833]: contexts: Jan 27 14:12:04 crc kubenswrapper[4833]: - context: Jan 27 14:12:04 crc kubenswrapper[4833]: cluster: default-cluster Jan 27 14:12:04 crc kubenswrapper[4833]: namespace: default Jan 27 14:12:04 crc kubenswrapper[4833]: user: default-auth Jan 27 14:12:04 crc kubenswrapper[4833]: name: default-context Jan 27 14:12:04 crc 
kubenswrapper[4833]: current-context: default-context Jan 27 14:12:04 crc kubenswrapper[4833]: kind: Config Jan 27 14:12:04 crc kubenswrapper[4833]: preferences: {} Jan 27 14:12:04 crc kubenswrapper[4833]: users: Jan 27 14:12:04 crc kubenswrapper[4833]: - name: default-auth Jan 27 14:12:04 crc kubenswrapper[4833]: user: Jan 27 14:12:04 crc kubenswrapper[4833]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 27 14:12:04 crc kubenswrapper[4833]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 27 14:12:04 crc kubenswrapper[4833]: EOF Jan 27 14:12:04 crc kubenswrapper[4833]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fpdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 14:12:04 crc kubenswrapper[4833]: > logger="UnhandledError" Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.390780 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.398428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.398568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.398592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.398627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.398640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.401776 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.411889 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.420300 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.431696 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.442732 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.452944 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.463142 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.475253 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.489683 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.501222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.501261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.501273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.501290 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.501301 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.509369 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.520468 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.531953 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.551386 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4
716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.604177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.604221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.604232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.604249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.604258 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.706301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.706802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.706816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.706840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.706852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.809899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.809946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.809956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.809974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.809984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.885487 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.885681 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:12:08.885648025 +0000 UTC m=+30.536972427 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.912402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.912479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.912497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.912756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.912784 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:04Z","lastTransitionTime":"2026-01-27T14:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.986389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.986476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.986509 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986510 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: I0127 14:12:04.986540 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986619 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:08.986572475 +0000 UTC m=+30.637896877 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986656 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986673 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986692 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986705 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986705 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:08.986693118 +0000 UTC m=+30.638017520 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986748 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:08.986737919 +0000 UTC m=+30.638062321 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986887 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986930 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.986944 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:04 crc kubenswrapper[4833]: E0127 14:12:04.987022 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:08.987003126 +0000 UTC m=+30.638327528 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.015033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.015079 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.015089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.015105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.015114 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.117566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.117620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.117632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.117650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.117660 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.168177 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:51:18.282621349 +0000 UTC Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.210003 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.210145 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:05 crc kubenswrapper[4833]: E0127 14:12:05.210201 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.210255 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:05 crc kubenswrapper[4833]: E0127 14:12:05.210299 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:05 crc kubenswrapper[4833]: E0127 14:12:05.210413 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.219715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.219774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.219787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.219806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.219820 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.322615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.322664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.322678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.322702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.322716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.425521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.425559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.425569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.425586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.425597 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.528863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.528920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.528930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.528950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.528973 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.631186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.631422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.631435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.631619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.631631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.734504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.734549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.734562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.734584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.734596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.837215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.837305 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.837317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.837338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.837353 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.940263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.940313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.940323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.940340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:05 crc kubenswrapper[4833]: I0127 14:12:05.940352 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:05Z","lastTransitionTime":"2026-01-27T14:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.042736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.042785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.042794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.042858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.042937 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.145097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.145159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.145170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.145185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.145197 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.168546 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:55:55.794955523 +0000 UTC Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.219038 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.248235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.248303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.248324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.248352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.248366 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.351515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.351572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.351585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.351604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.351621 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.454095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.454145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.454157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.454176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.454190 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.556969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.557013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.557026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.557043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.557054 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.659983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.660020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.660034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.660063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.660076 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.762712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.762770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.762783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.762806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.762822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.866562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.866639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.866652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.866671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.866706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.969638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.969720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.969742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.969770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:06 crc kubenswrapper[4833]: I0127 14:12:06.969790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:06Z","lastTransitionTime":"2026-01-27T14:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.072824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.072868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.072880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.072898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.072910 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.169225 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:28:33.046080333 +0000 UTC Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.175720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.175827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.175838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.175858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.175904 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.210496 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.210584 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.210496 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:07 crc kubenswrapper[4833]: E0127 14:12:07.210692 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:07 crc kubenswrapper[4833]: E0127 14:12:07.210855 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:07 crc kubenswrapper[4833]: E0127 14:12:07.210971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.280087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.280492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.280520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.280594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.282498 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.389586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.389649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.389660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.389680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.389692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.493120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.493177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.493190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.493207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.493219 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.596832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.596889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.596902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.596927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.596945 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.700496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.700550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.700561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.700580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.700594 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.802893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.802927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.802937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.802953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.802963 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.878918 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6jftn"] Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.879510 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.883514 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.883644 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.883676 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.884576 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.895878 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.905746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.905857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.905933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.905969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.905984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:07Z","lastTransitionTime":"2026-01-27T14:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.912058 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.918719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3250d272-0963-40d7-8e9b-7b0129ee4620-host\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.918760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3250d272-0963-40d7-8e9b-7b0129ee4620-serviceca\") 
pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.918808 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xplp\" (UniqueName: \"kubernetes.io/projected/3250d272-0963-40d7-8e9b-7b0129ee4620-kube-api-access-7xplp\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.932788 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.941162 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.945538 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.951290 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.955707 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.968351 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.980577 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:07 crc kubenswrapper[4833]: I0127 14:12:07.993778 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3
786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.006836 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.008190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.008242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc 
kubenswrapper[4833]: I0127 14:12:08.008254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.008274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.008288 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.019390 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.019815 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xplp\" (UniqueName: \"kubernetes.io/projected/3250d272-0963-40d7-8e9b-7b0129ee4620-kube-api-access-7xplp\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 
14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.019915 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3250d272-0963-40d7-8e9b-7b0129ee4620-host\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.019954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3250d272-0963-40d7-8e9b-7b0129ee4620-serviceca\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.020159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3250d272-0963-40d7-8e9b-7b0129ee4620-host\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.021173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3250d272-0963-40d7-8e9b-7b0129ee4620-serviceca\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.029542 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.037923 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xplp\" (UniqueName: \"kubernetes.io/projected/3250d272-0963-40d7-8e9b-7b0129ee4620-kube-api-access-7xplp\") pod \"node-ca-6jftn\" (UID: \"3250d272-0963-40d7-8e9b-7b0129ee4620\") " pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.040063 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.052011 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.065829 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.078489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.092389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.106590 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.110841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.110902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.110917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.110937 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.110952 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.124491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.135813 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.148718 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.161559 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.170069 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:17:23.023369345 +0000 UTC Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.180859 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.194578 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.196723 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6jftn" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.206935 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: W0127 14:12:08.209769 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3250d272_0963_40d7_8e9b_7b0129ee4620.slice/crio-9f430d03f015111543f191504e472e534473a161be207c2dbe31a109b4fd26f6 WatchSource:0}: Error finding container 9f430d03f015111543f191504e472e534473a161be207c2dbe31a109b4fd26f6: Status 404 returned error can't find the container with id 9f430d03f015111543f191504e472e534473a161be207c2dbe31a109b4fd26f6 Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.213200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.213257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.213276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.213297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.213310 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.220269 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.238052 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.251856 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.265164 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.275320 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.290201 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.317665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.317707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.317717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.317737 4833 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.317750 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.398756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6jftn" event={"ID":"3250d272-0963-40d7-8e9b-7b0129ee4620","Type":"ContainerStarted","Data":"9f430d03f015111543f191504e472e534473a161be207c2dbe31a109b4fd26f6"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.420565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.420614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.420627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.420647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.420657 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.523422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.523492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.523509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.523531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.523546 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.627029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.627873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.628005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.628029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.628047 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.731202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.731268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.731279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.731301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.731316 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.833941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.833997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.834009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.834028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.834042 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.927858 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:08 crc kubenswrapper[4833]: E0127 14:12:08.928215 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:12:16.92817088 +0000 UTC m=+38.579495282 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.937032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.937096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.937107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.937126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.937140 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:08Z","lastTransitionTime":"2026-01-27T14:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:08 crc kubenswrapper[4833]: I0127 14:12:08.978765 4833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.000538 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.001254 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.001481 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.029106 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.029168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 
14:12:09.029713 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.030004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030070 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030159 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030177 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:17.030152166 +0000 UTC m=+38.681476568 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030232 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:17.030204537 +0000 UTC m=+38.681528939 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030331 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030344 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030362 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030395 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:17.030382201 +0000 UTC m=+38.681706603 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030470 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030482 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030492 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.030526 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:17.030511415 +0000 UTC m=+38.681835807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.040659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.040711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.040724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.040745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.040763 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.143310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.143359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.143371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.143392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.143404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.171047 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:01:59.142991118 +0000 UTC Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.210564 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.210679 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.210763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.210870 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.211310 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.211583 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.232830 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.244921 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.245801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.245846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.245857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.245877 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.245888 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.257529 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.272378 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.287537 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.299637 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.309842 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.322767 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.334319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.345494 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.348236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.348286 4833 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.348306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.348326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.348335 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.354349 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.366918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.379549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.394104 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.402341 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6jftn" event={"ID":"3250d272-0963-40d7-8e9b-7b0129ee4620","Type":"ContainerStarted","Data":"3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.411732 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.430022 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc698
7bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.445379 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.450937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.450977 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.450989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.451027 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.451038 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.457657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.471042 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.481073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.493121 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.503197 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.517980 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.530678 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.540664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.549647 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.554147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.554204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.554216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.554235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.554248 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.561670 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a
2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.573931 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.587743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.605244 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.656212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.656265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.656275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.656294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.656312 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.758611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.758662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.758673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.758691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.758702 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.861516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.861574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.861592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.861615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.861630 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.960589 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.961294 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:09 crc kubenswrapper[4833]: E0127 14:12:09.961499 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.965418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.965480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.965489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.965507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:09 crc kubenswrapper[4833]: I0127 14:12:09.965519 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:09Z","lastTransitionTime":"2026-01-27T14:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.068339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.068390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.068402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.068423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.068434 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171208 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:15:00.238013141 +0000 UTC Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.171469 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.274437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.274526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.274561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.274582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.274594 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.377475 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.377547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.377561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.377592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.377610 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.480258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.480308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.480321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.480343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.480357 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.582948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.583030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.583042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.583061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.583075 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.686097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.686139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.686152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.686170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.686182 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.789607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.789665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.789675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.789695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.789708 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.892838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.892907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.892929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.892953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.892967 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.996831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.996878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.996889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.996907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:10 crc kubenswrapper[4833]: I0127 14:12:10.996920 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:10Z","lastTransitionTime":"2026-01-27T14:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.099953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.100003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.100013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.100033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.100045 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.172095 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:06:38.730281182 +0000 UTC Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.202715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.202769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.202783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.202801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.202813 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.210557 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.210689 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:11 crc kubenswrapper[4833]: E0127 14:12:11.210704 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.210568 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:11 crc kubenswrapper[4833]: E0127 14:12:11.211194 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:11 crc kubenswrapper[4833]: E0127 14:12:11.211283 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.305822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.305870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.305879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.305901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.305911 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.407598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.407635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.407645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.407659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.407669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.516151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.516217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.516233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.516256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.516272 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.619811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.619868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.619880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.619899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.619912 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.722502 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.722576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.722600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.722628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.722651 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.826661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.826748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.826771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.826804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.826824 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.929920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.929974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.929987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.930009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:11 crc kubenswrapper[4833]: I0127 14:12:11.930023 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:11Z","lastTransitionTime":"2026-01-27T14:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.033635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.033669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.033677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.033697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.033708 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.136383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.136430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.136456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.136476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.136489 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.172409 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:21:41.556311016 +0000 UTC Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.238751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.238805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.238815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.238833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.238850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.286573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.286613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.286623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.286641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.286651 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.299326 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.302925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.302964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.302976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.302996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.303008 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.312869 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.317162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.317201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.317214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.317242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.317256 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.327775 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.332022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.332058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.332067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.332084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.332093 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.341961 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.345271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.345303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.345313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.345329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.345342 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.357497 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:12 crc kubenswrapper[4833]: E0127 14:12:12.357624 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.359370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.359400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.359409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.359428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.359439 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.461789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.461834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.461844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.461863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.461873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.564370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.564468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.564482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.564506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.564517 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.667772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.667835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.667848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.667870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.667881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.771705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.771852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.771881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.771949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.772044 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.875588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.875638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.875652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.875677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.875690 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.978721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.978790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.978805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.978823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:12 crc kubenswrapper[4833]: I0127 14:12:12.978836 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:12Z","lastTransitionTime":"2026-01-27T14:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.081086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.081158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.081176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.081202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.081216 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.172960 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:44:19.870444623 +0000 UTC Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.185752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.185839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.185854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.185878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.185903 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.209776 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.209839 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:13 crc kubenswrapper[4833]: E0127 14:12:13.209984 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.210095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:13 crc kubenswrapper[4833]: E0127 14:12:13.210296 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:13 crc kubenswrapper[4833]: E0127 14:12:13.210614 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.289807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.289881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.289893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.289913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.289928 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.397999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.398055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.398064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.398082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.398099 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.501469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.501509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.501520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.501540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.501569 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.604281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.604320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.604330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.604347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.604359 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.708108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.708148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.708158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.708173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.708187 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.810819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.810879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.810891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.810912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.810924 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.913725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.913779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.913789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.913805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:13 crc kubenswrapper[4833]: I0127 14:12:13.913816 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:13Z","lastTransitionTime":"2026-01-27T14:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.016679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.016741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.016753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.016770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.016781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.119468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.119524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.119534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.119552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.119562 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.173890 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:30:01.78420455 +0000 UTC Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.222720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.222756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.222765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.222778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.222792 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.327154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.327218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.327227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.327244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.327259 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.430971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.431048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.431062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.431085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.431099 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.533761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.533893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.533907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.533930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.533943 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.637168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.637240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.637262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.637288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.637309 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.741068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.741120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.741130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.741148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.741166 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.765146 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d"] Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.765811 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.768254 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.768795 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.782077 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.792530 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.799873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.799926 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24d7702c-8ba7-4782-a39f-5104f5878a28-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.799956 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-5njfx\" (UniqueName: \"kubernetes.io/projected/24d7702c-8ba7-4782-a39f-5104f5878a28-kube-api-access-5njfx\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.800024 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.805065 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.819751 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.830992 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.842331 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.844344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.844375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.844387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.844404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.844417 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.876944 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.900782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.900887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/24d7702c-8ba7-4782-a39f-5104f5878a28-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.900925 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5njfx\" (UniqueName: \"kubernetes.io/projected/24d7702c-8ba7-4782-a39f-5104f5878a28-kube-api-access-5njfx\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.900970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.902379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.902844 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/24d7702c-8ba7-4782-a39f-5104f5878a28-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 
14:12:14.911380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/24d7702c-8ba7-4782-a39f-5104f5878a28-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.921951 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5njfx\" (UniqueName: \"kubernetes.io/projected/24d7702c-8ba7-4782-a39f-5104f5878a28-kube-api-access-5njfx\") pod \"ovnkube-control-plane-749d76644c-q2f5d\" (UID: \"24d7702c-8ba7-4782-a39f-5104f5878a28\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.922830 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.938852 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50d
a9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.947070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.947145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.947158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.947185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.947202 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:14Z","lastTransitionTime":"2026-01-27T14:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.953053 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.963026 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.971583 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.982320 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:14 crc kubenswrapper[4833]: I0127 14:12:14.992601 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.000907 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.010302 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.049839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.049889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.049905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.049929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.049945 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.082504 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" Jan 27 14:12:15 crc kubenswrapper[4833]: W0127 14:12:15.104130 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d7702c_8ba7_4782_a39f_5104f5878a28.slice/crio-d86e1a39db51f435b37acc1569074148b32a64c6b894f5027b9d1aae222654fb WatchSource:0}: Error finding container d86e1a39db51f435b37acc1569074148b32a64c6b894f5027b9d1aae222654fb: Status 404 returned error can't find the container with id d86e1a39db51f435b37acc1569074148b32a64c6b894f5027b9d1aae222654fb Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.153110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.153153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.153164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.153181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.153192 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.174924 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:32:18.086291268 +0000 UTC Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.210379 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.210569 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:15 crc kubenswrapper[4833]: E0127 14:12:15.210750 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.211289 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:15 crc kubenswrapper[4833]: E0127 14:12:15.211465 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:15 crc kubenswrapper[4833]: E0127 14:12:15.211853 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.256634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.256681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.256697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.256717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.256730 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.360519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.361308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.361324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.361344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.361359 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.421897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" event={"ID":"24d7702c-8ba7-4782-a39f-5104f5878a28","Type":"ContainerStarted","Data":"d86e1a39db51f435b37acc1569074148b32a64c6b894f5027b9d1aae222654fb"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.464619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.464654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.464665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.464684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.464698 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.567924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.567970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.567985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.568002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.568014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.670202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.670239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.670248 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.670264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.670277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.773278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.773354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.773366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.773383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.773395 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.898300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.898358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.898372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.898392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:15 crc kubenswrapper[4833]: I0127 14:12:15.898409 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:15Z","lastTransitionTime":"2026-01-27T14:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.001659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.001710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.001721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.001739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.001751 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.105466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.105521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.105533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.105555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.105569 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.175530 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:06:06.813252886 +0000 UTC Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.208777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.208820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.208829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.208844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.208854 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.239338 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jxvwd"] Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.239952 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.240036 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.252924 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.268368 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.280511 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.291528 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.301100 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.311951 4833 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.312001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.312015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.312034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.312047 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.314984 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a
2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.321906 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxvp\" (UniqueName: \"kubernetes.io/projected/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-kube-api-access-sgxvp\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.321958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.330676 4833 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.345985 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.370014 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.394835 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc698
7bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.409575 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.415427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.415497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.415512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.415536 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.415552 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.422865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxvp\" (UniqueName: \"kubernetes.io/projected/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-kube-api-access-sgxvp\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.422912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.423095 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.423181 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:16.923155737 +0000 UTC m=+38.574480139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.425550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.428276 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerStarted","Data":"5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.430677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xjvwp" event={"ID":"fa8c488b-eed2-4666-a5c3-6aa129655eee","Type":"ContainerStarted","Data":"657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.432475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.435321 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.435390 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.437677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" event={"ID":"24d7702c-8ba7-4782-a39f-5104f5878a28","Type":"ContainerStarted","Data":"e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.437730 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" event={"ID":"24d7702c-8ba7-4782-a39f-5104f5878a28","Type":"ContainerStarted","Data":"ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.439014 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" exitCode=0 Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.439080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.441245 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.448251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgxvp\" (UniqueName: \"kubernetes.io/projected/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-kube-api-access-sgxvp\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.458145 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.474933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.487189 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.497736 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.510670 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.518646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.518687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.518697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.518712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.519966 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.525296 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.540982 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.571290 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.588929 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.604980 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.618061 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.622273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.622326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.622336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.622355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.622366 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.629491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.642366 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.653483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.666964 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.679039 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.692832 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.705049 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.715871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.723918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.725660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.725693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.725704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.725723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.725739 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.733846 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.828139 4833 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.828178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.828190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.828209 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.828222 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.929791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.929955 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.930423 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.930512 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:17.930493853 +0000 UTC m=+39.581818255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:16 crc kubenswrapper[4833]: E0127 14:12:16.930622 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:12:32.930614116 +0000 UTC m=+54.581938518 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.933815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.933856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.933871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.933893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:16 crc kubenswrapper[4833]: I0127 14:12:16.933911 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:16Z","lastTransitionTime":"2026-01-27T14:12:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.031593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.032192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.031768 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.032253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032305 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:33.032272874 +0000 UTC m=+54.683597286 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032386 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032403 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032417 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032486 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:33.032469009 +0000 UTC m=+54.683793431 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.032544 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032560 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032597 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:33.032587412 +0000 UTC m=+54.683911814 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032655 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032671 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032681 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.032726 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:33.032716195 +0000 UTC m=+54.684040607 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.036551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.036596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.036606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.036623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.036634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.138940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.138985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.139000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.139018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.139034 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.176381 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:16:37.31131794 +0000 UTC Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.209849 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.209907 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.210028 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.210164 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.210259 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.210364 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.242034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.242101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.242114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.242133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.242149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.345339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.345385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.345398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.345420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.345436 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.840466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.840545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.840583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.840629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.840658 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.845175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.845360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.845503 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.847573 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188" exitCode=0 Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.847662 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.849926 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.856017 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.856068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.864522 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.878813 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.893513 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.916602 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.932642 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.937978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.938259 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: E0127 14:12:17.938721 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:19.938701547 +0000 UTC m=+41.590025949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.946847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.949422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.949483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.949499 
4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.949519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.949533 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:17Z","lastTransitionTime":"2026-01-27T14:12:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.961532 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.973906 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:17 crc kubenswrapper[4833]: I0127 14:12:17.987917 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc 
kubenswrapper[4833]: I0127 14:12:18.001263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:17Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.018342 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.036740 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.053397 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.054427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.054492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.054505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.054526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.054539 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.067195 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.079412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.090697 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.109303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.124261 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:
41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.137909 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.153252 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.157620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.157664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc 
kubenswrapper[4833]: I0127 14:12:18.157677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.157699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.157711 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.176587 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:40:18.610640051 +0000 UTC Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.177219 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.195916 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.205788 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.210268 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:18 crc kubenswrapper[4833]: E0127 14:12:18.210626 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.221318 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.232263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.245944 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.260138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.260201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.260216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.260235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.260250 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.270047 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.288500 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.305254 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.323069 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.339808 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.352412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.362783 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.363720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.363778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.363794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.363813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.363826 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.378167 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc 
kubenswrapper[4833]: I0127 14:12:18.467431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.467517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.467529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.467546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.467559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.571291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.571349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.571359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.571381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.571393 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.674580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.674635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.674645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.674663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.674674 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.777809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.777870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.777889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.777955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.777971 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.864139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerStarted","Data":"d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.865555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerStarted","Data":"378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.868766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881077 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.881396 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.894150 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.919890 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc
4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.937110 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.953296 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.966676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.984245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.984424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.984601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.984719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.984831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:18Z","lastTransitionTime":"2026-01-27T14:12:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:18 crc kubenswrapper[4833]: I0127 14:12:18.987174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.001400 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:18Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.017490 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.032489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.051248 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.062934 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.074352 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc 
kubenswrapper[4833]: I0127 14:12:19.088171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.088535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.088614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.088714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.088785 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.094313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.113322 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.129072 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.146970 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.164078 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.177903 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:17:43.621112991 +0000 UTC Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.180419 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.194414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 
14:12:19.194501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.194514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.194532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.194561 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.199018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.209564 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.209639 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.209587 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:19 crc kubenswrapper[4833]: E0127 14:12:19.209762 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:19 crc kubenswrapper[4833]: E0127 14:12:19.209822 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:19 crc kubenswrapper[4833]: E0127 14:12:19.210034 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.220861 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.244258 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.258419 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.271897 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.285865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.296898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.296957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.296973 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.297022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.297034 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.302908 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.323252 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.336622 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.348956 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.362707 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.377097 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.396598 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.400686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.400737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.400755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc 
kubenswrapper[4833]: I0127 14:12:19.400780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.400796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.409563 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.422400 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc 
kubenswrapper[4833]: I0127 14:12:19.437277 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.452651 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.465556 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.491649 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.503877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.503943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.503954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.503980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.503991 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.509421 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.523285 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.535098 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.553018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e68
65a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.567690 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.584332 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.597487 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.607653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.607707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.607720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc 
kubenswrapper[4833]: I0127 14:12:19.607739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.607752 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.609797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.622625 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc 
kubenswrapper[4833]: I0127 14:12:19.638039 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.654957 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.676765 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.692969 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.711422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.711499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.711518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.711541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.711558 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.814966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.815008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.815018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.815033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.815045 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.876463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.876523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.918376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.918425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.918437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.918534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.918551 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:19Z","lastTransitionTime":"2026-01-27T14:12:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:19 crc kubenswrapper[4833]: I0127 14:12:19.956395 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:19 crc kubenswrapper[4833]: E0127 14:12:19.956580 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:19 crc kubenswrapper[4833]: E0127 14:12:19.956648 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:23.956627573 +0000 UTC m=+45.607951975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.022298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.022337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.022346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.022364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.022375 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.125617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.125662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.125674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.125694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.125706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.178844 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:36:18.224543256 +0000 UTC Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.210629 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:20 crc kubenswrapper[4833]: E0127 14:12:20.210871 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.229172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.229259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.229277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.229302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.229321 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.332738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.332814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.332832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.332862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.332882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.436303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.436399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.436410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.436431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.436467 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.539493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.539614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.539634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.539661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.539677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.643050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.643089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.643098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.643116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.643127 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.746380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.746421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.746473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.746493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.746509 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.848682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.848734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.848743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.848757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.848766 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.951732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.951765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.951773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.951786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:20 crc kubenswrapper[4833]: I0127 14:12:20.951795 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:20Z","lastTransitionTime":"2026-01-27T14:12:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.054092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.054133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.054142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.054161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.054174 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.157058 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.157105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.157117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.157135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.157148 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.179751 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:01:52.038921152 +0000 UTC Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.210392 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.210430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.210473 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:21 crc kubenswrapper[4833]: E0127 14:12:21.210590 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:21 crc kubenswrapper[4833]: E0127 14:12:21.210711 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:21 crc kubenswrapper[4833]: E0127 14:12:21.210792 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.260526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.260573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.260584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.260602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.260615 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.363487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.363812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.363902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.364000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.364141 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.466953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.467002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.467014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.467030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.467041 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.572895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.572947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.572959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.572975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.572986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.676193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.676237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.676246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.676262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.676275 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.779692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.779764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.779986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.780015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.780039 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.882467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.882522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.882534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.882551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.882565 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.886719 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535" exitCode=0 Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.886776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.911002 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.932232 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.946768 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.961096 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.976875 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622e
ad23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.985374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.985412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.985421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.985458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.985470 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:21Z","lastTransitionTime":"2026-01-27T14:12:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:21 crc kubenswrapper[4833]: I0127 14:12:21.989084 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.001756 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.018243 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.033245 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.052776 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.068936 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.083642 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.088717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.088755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.088765 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.088780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.088790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.104210 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.126820 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.140329 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.152385 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.162612 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.180918 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:10:33.430440574 +0000 UTC Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.192787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.192841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.192853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.192872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.192890 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.210645 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.211016 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.211263 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.295779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.295825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.295836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.295853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.295864 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.399497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.399525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.399535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.399548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.399559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.502736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.502776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.502785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.502799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.502810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.606565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.606611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.606621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.606638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.606650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.655633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.655680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.655690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.655706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.655729 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.669063 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.673467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.673504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.673514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.673533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.673544 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.690654 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.695544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.695594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.695607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.695626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.695939 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.714264 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.719492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.719548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.719560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.719577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.719589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.733883 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.739541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.739878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.739972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.740113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.740230 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.756266 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: E0127 14:12:22.756435 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.759039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.759086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.759101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.759117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.759129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.862659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.862728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.862747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.862775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.862797 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.897485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.899734 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.903238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.903981 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.908724 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0" exitCode=0 Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.908770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.924543 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.941738 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.957860 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.964897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.964948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.964957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.964975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.964984 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:22Z","lastTransitionTime":"2026-01-27T14:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.970950 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc kubenswrapper[4833]: I0127 14:12:22.984810 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:22 crc 
kubenswrapper[4833]: I0127 14:12:22.997546 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:22Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.011192 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.024954 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc 
kubenswrapper[4833]: I0127 14:12:23.037211 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.060147 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.067608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.067658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.067669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.067687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.067699 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.074220 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.085579 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.105180 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.117948 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.147965 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.166343 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.171101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.171142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.171171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc 
kubenswrapper[4833]: I0127 14:12:23.171456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.171472 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.182018 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:37:16.870673958 +0000 UTC Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.182222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.196594 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.210612 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.210699 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.210837 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.211741 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:23 crc kubenswrapper[4833]: E0127 14:12:23.211981 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:23 crc kubenswrapper[4833]: E0127 14:12:23.212087 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:23 crc kubenswrapper[4833]: E0127 14:12:23.211573 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.225038 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.245143 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a5
3e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.260367 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274173 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.274266 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.285515 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.300329 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.317021 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.334408 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.349896 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.362235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.375397 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc 
kubenswrapper[4833]: I0127 14:12:23.377300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.377333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.377348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.377367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.377380 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.390219 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.410353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.431230 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.446130 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.480636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.480674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.480686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.480703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.480717 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.584137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.584178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.584191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.584210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.584222 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.687422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.687492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.687505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.687521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.687533 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.791113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.791419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.791602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.791714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.791813 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.898281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.898328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.898340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.898360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.898373 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:23Z","lastTransitionTime":"2026-01-27T14:12:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.915767 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f" exitCode=0 Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.915852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f"} Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.938174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.951375 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.975796 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.992014 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:23 crc kubenswrapper[4833]: I0127 14:12:23.995942 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:23 crc 
kubenswrapper[4833]: E0127 14:12:23.996069 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:23 crc kubenswrapper[4833]: E0127 14:12:23.996141 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:31.996122114 +0000 UTC m=+53.647446516 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.002474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.002532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.002550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.002573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.002591 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.012696 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.028198 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.044746 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.066321 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.088678 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.105430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.105503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.105514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 
14:12:24.105532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.105544 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.106436 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.120172 4833 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.131033 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.142676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc 
kubenswrapper[4833]: I0127 14:12:24.157837 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.170791 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.183044 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 
17:02:56.321841149 +0000 UTC Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.187413 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.208353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.208391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.208401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.208418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.208429 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.209559 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:24 crc kubenswrapper[4833]: E0127 14:12:24.209700 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.214660 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.310962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.310993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.311003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.311019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.311028 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.413852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.413902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.413913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.413932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.413944 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.535236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.535299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.535311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.535331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.535344 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.638947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.638986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.638997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.639012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.639022 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.742732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.742780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.742792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.742810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.742823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.846083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.846804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.846833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.846861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.846881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.950036 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.950093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.950105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.950124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:24 crc kubenswrapper[4833]: I0127 14:12:24.950136 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:24Z","lastTransitionTime":"2026-01-27T14:12:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.052334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.052393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.052404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.052423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.052437 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.154867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.154951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.154967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.154988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.155001 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.183874 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:38:23.285242471 +0000 UTC Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.209883 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.210026 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:25 crc kubenswrapper[4833]: E0127 14:12:25.210178 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.210271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:25 crc kubenswrapper[4833]: E0127 14:12:25.210408 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:25 crc kubenswrapper[4833]: E0127 14:12:25.210405 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.258901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.259151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.259167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.259194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.259215 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.366304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.366376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.366394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.366423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.366559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.471995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.472049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.472065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.472087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.472105 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.574754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.574814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.574833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.574858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.574876 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.677661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.677740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.677764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.677793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.677811 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.781559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.781606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.781616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.781634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.781646 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.883988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.884026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.884036 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.884051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.884061 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.932043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerStarted","Data":"f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.939112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.939493 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.939538 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.939734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.953174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.979092 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.982593 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.983066 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.986987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.987046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.987059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.987081 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.987100 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:25Z","lastTransitionTime":"2026-01-27T14:12:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:25 crc kubenswrapper[4833]: I0127 14:12:25.996434 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:25Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.012869 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.029208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.046886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.062260 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.076243 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938c
b04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089907 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.089923 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.103537 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.115068 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12
:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.129318 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.143283 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.163309 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.177947 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.184998 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:52:29.183622068 +0000 UTC Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.192210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.192256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.192267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.192288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.192302 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.193521 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.210073 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:26 crc kubenswrapper[4833]: E0127 14:12:26.210220 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.218313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s9
5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.252951 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.269786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.284232 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.294605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc 
kubenswrapper[4833]: I0127 14:12:26.294651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.294664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.294683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.294697 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.298434 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.314785 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.329985 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.343507 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.358042 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.369226 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.382509 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc 
kubenswrapper[4833]: I0127 14:12:26.397357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.397400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.397412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.397433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.397473 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.398892 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.412775 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.426725 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.443384 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.461793 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.477296 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s9
5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.499952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.499999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.500008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.500028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.500042 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.507215 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.603113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.603163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.603174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.603194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.603207 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.710281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.710334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.710346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.710364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.710376 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.813652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.813707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.813718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.813736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.813747 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.917238 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.917287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.917300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.917315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.917325 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:26Z","lastTransitionTime":"2026-01-27T14:12:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.945969 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c" exitCode=0 Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.947002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c"} Jan 27 14:12:26 crc kubenswrapper[4833]: I0127 14:12:26.977020 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaea
d203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\
\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:26Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.004376 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021351 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.021615 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.039234 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.059248 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.075141 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.096651 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.113354 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.125035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.125083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.125097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc 
kubenswrapper[4833]: I0127 14:12:27.125116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.125126 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.127474 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.143031 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.161659 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.176605 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.185532 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:59:28.640632341 +0000 UTC Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.192018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.209353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.209869 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:27 crc kubenswrapper[4833]: E0127 14:12:27.210090 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.210569 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:27 crc kubenswrapper[4833]: E0127 14:12:27.210705 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.210753 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:27 crc kubenswrapper[4833]: E0127 14:12:27.210830 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.223362 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.228545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc 
kubenswrapper[4833]: I0127 14:12:27.228606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.228624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.228644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.228658 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.236716 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.250734 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:27 crc 
kubenswrapper[4833]: I0127 14:12:27.331751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.331807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.331820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.331841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.331856 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.434774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.434840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.434854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.434875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.434887 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.537937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.537978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.537987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.538002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.538011 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.640626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.640676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.640686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.640704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.640719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.743385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.743468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.743483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.743504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.743521 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.847708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.847755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.847766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.847790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.847802 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.950438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.950531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.950543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.950564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.950580 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:27Z","lastTransitionTime":"2026-01-27T14:12:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.959487 4833 generic.go:334] "Generic (PLEG): container finished" podID="f348cf7e-0a0b-400a-af50-1e342385c42d" containerID="ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb" exitCode=0 Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.959571 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerDied","Data":"ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb"} Jan 27 14:12:27 crc kubenswrapper[4833]: I0127 14:12:27.990836 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:27Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.010497 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.025841 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.039886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054314 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.054344 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.071674 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.088352 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.106491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.157916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.157971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.157983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.158051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.158065 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.159979 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.174680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.185968 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:30:12.164653527 +0000 UTC Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.192865 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.209763 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:28 crc kubenswrapper[4833]: E0127 14:12:28.209952 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.210523 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.231736 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.250884 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.260884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.260918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.260927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.260946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.260958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.264773 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.280396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.295875 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.364185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.364821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.364844 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.364869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.364885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.468316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.468372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.468385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.468405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.468418 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.571973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.572487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.572634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.572780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.573066 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.680501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.680554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.680563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.680584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.680595 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.783143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.783187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.783196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.783218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.783227 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.885836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.885920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.885932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.885950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.885962 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.970136 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" event={"ID":"f348cf7e-0a0b-400a-af50-1e342385c42d","Type":"ContainerStarted","Data":"bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.972708 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/0.log" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.976576 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3" exitCode=1 Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.976640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.977426 4833 scope.go:117] "RemoveContainer" containerID="51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.988220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.988274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.988284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.988301 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.988311 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:28Z","lastTransitionTime":"2026-01-27T14:12:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:28 crc kubenswrapper[4833]: I0127 14:12:28.994814 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:28Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.012053 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.025507 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.046001 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.062812 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.076254 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.087315 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.091552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.091589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.091601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.091617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.091629 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.100313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.112894 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.124110 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.136630 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.148003 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.164170 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc 
kubenswrapper[4833]: I0127 14:12:29.179489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.186215 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:48:16.302704315 +0000 UTC Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 
14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f
06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.195680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.211625 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.211625 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.211647 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:29 crc kubenswrapper[4833]: E0127 14:12:29.212090 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:29 crc kubenswrapper[4833]: E0127 14:12:29.212157 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:29 crc kubenswrapper[4833]: E0127 14:12:29.212308 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.221822 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.236051 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.248574 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.260396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc 
kubenswrapper[4833]: I0127 14:12:29.277469 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.294233 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.298208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.298252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.298262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.298279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.298292 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.313186 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.329644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.353316 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.367906 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.400787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.400831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.400844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.400884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.400895 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.408276 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0127 14:12:28.094290 6131 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.094392 6131 reflector.go:311] Stopping 
reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 14:12:28.094759 6131 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.094979 6131 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095057 6131 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095146 6131 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.095401 6131 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.436017 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.449975 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.465290 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.480971 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.497651 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.504304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.504384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.504407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.504436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.504491 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.515718 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.531839 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.546325 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.561481 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.583829 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c3
2b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.599502 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.608204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.608394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.608605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.608745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.608913 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.616478 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.628169 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.645439 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.660198 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.678055 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.695564 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938c
b04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.709533 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.712013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.712061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.712078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc 
kubenswrapper[4833]: I0127 14:12:29.712101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.712122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.726621 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.739345 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc 
kubenswrapper[4833]: I0127 14:12:29.757947 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.783324 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14
:12:28Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0127 14:12:28.094290 6131 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.094392 6131 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 14:12:28.094759 6131 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.094979 6131 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095057 6131 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095146 6131 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.095401 6131 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.809043 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.816464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.816515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.816529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.816549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.816564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.824786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.845510 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.920048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.920101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.920112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.920131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.920143 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:29Z","lastTransitionTime":"2026-01-27T14:12:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.982163 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/0.log" Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.984696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5"} Jan 27 14:12:29 crc kubenswrapper[4833]: I0127 14:12:29.985317 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.002689 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.018018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.023357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.023397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.023407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.023425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.023435 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.033400 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.044820 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.056809 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.069930 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.081764 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.099213 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.113696 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc 
kubenswrapper[4833]: I0127 14:12:30.126103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.126154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.126164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.126181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.126194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.131420 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a
2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.160307 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.175756 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.187622 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:41:54.596221832 +0000 UTC Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.200093 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0127 14:12:28.094290 6131 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.094392 6131 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 14:12:28.094759 6131 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.094979 6131 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095057 6131 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095146 6131 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.095401 6131 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath
\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovn
kube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.210556 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:30 crc kubenswrapper[4833]: E0127 14:12:30.210727 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.223328 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.229063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.229125 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.229140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.229166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.229182 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.237796 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.255787 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.268345 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.331976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.332036 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.332049 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.332070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.332085 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.434290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.434653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.434723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.434794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.434853 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.538482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.538551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.538568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.538593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.538610 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.642162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.642243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.642257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.642286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.642302 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.745723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.745759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.745770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.745787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.745796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.872570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.872669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.872695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.873311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.873369 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.976342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.976409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.976432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.976508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.976532 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:30Z","lastTransitionTime":"2026-01-27T14:12:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.990431 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/1.log" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.991083 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/0.log" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.994878 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5" exitCode=1 Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.994962 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5"} Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.995040 4833 scope.go:117] "RemoveContainer" containerID="51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3" Jan 27 14:12:30 crc kubenswrapper[4833]: I0127 14:12:30.997881 4833 scope.go:117] "RemoveContainer" containerID="9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5" Jan 27 14:12:30 crc kubenswrapper[4833]: E0127 14:12:30.998402 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.018281 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.048399 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.068942 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.078804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.078856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.078871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc 
kubenswrapper[4833]: I0127 14:12:31.078890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.078904 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.092767 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.110104 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc 
kubenswrapper[4833]: I0127 14:12:31.125915 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.141890 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.162745 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fb4d63eedec7d3a8a98ce063e674c71755f23da6f5f4093a81794df81dd8f3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"message\\\":\\\"rmers/externalversions/factory.go:140\\\\nI0127 14:12:28.094290 6131 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.094392 6131 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 14:12:28.094759 6131 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.094979 6131 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095057 6131 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 14:12:28.095146 6131 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:12:28.095401 6131 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network 
policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.177864 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-co
ntroller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.182212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.182257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.182267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.182286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.182295 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.188387 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:08:08.379659025 +0000 UTC Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.191503 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.205541 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.210553 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.210578 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.210681 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:31 crc kubenswrapper[4833]: E0127 14:12:31.210823 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:31 crc kubenswrapper[4833]: E0127 14:12:31.210939 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:31 crc kubenswrapper[4833]: E0127 14:12:31.211039 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.219078 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.243994 4833 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a5
3e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.259827 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.271671 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285224 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.285810 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.301328 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.393217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.393269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.393283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.393300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.393316 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.497480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.497529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.497541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.497560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.497572 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.601057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.601117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.601135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.601356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.601374 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.703724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.703790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.703803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.703819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.703831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.806633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.806674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.806685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.806701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.806712 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.909575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.909610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.909640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.909657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:31 crc kubenswrapper[4833]: I0127 14:12:31.909669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:31Z","lastTransitionTime":"2026-01-27T14:12:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.001551 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/1.log" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.007592 4833 scope.go:117] "RemoveContainer" containerID="9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5" Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.007816 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.012144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.012194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.012207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.012230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.012247 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.029514 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.048564 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.067607 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.090499 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.090770 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.090899 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:12:48.09086551 +0000 UTC m=+69.742189912 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.092415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.114971 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.115334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.115367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.115378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.115396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.115406 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.132690 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.153581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.172053 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.189508 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:14:45.046008505 +0000 UTC Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.200715 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.210393 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.210767 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.218597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.218649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.218662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.218680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.218696 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.219883 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.236632 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.250430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.267212 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.281970 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.295738 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc 
kubenswrapper[4833]: I0127 14:12:32.311061 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.321433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.321500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.321514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.321531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.321542 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.324857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.424283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.424336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.424345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 
14:12:32.424358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.424367 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.527677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.527735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.527759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.527787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.527808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.631171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.631234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.631244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.631263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.631299 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.734764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.734828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.734842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.734863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.734875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.828170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.828242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.828254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.828277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.828295 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.843744 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.849034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.849086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.849099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.849123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.849138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.862411 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.879529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.879593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.879612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.879637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.879650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.893356 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.897563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.897603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.897614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.897635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.897646 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.910871 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.915280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.915313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.915323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.915338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.915349 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.928302 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.928485 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.930418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.930484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.930497 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.930514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.930528 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:32Z","lastTransitionTime":"2026-01-27T14:12:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:32 crc kubenswrapper[4833]: I0127 14:12:32.998218 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:12:32 crc kubenswrapper[4833]: E0127 14:12:32.998697 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:13:04.998674606 +0000 UTC m=+86.649998998 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.033046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.033082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.033092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.033106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: 
I0127 14:12:33.033129 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.099159 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.099224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.099256 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.099300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099390 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099458 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099477 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099492 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099507 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099533 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:05.099517054 +0000 UTC m=+86.750841456 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099560 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:05.099548985 +0000 UTC m=+86.750873387 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099465 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099632 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099422 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:33 
crc kubenswrapper[4833]: E0127 14:12:33.099765 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:05.09974253 +0000 UTC m=+86.751067012 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.099814 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:05.099803771 +0000 UTC m=+86.751128273 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.135293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.135350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.135360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.135376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.135432 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.189912 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 17:25:35.84700153 +0000 UTC Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.209784 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.209975 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.209996 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.210068 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.210474 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:33 crc kubenswrapper[4833]: E0127 14:12:33.210276 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.239163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.239276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.239335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.239359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.239428 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.341854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.341903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.341915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.341932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.341951 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.445490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.445544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.445562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.445585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.445600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.548523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.548593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.548631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.548649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.548661 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.651368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.651433 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.651459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.651484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.651497 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.755069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.755152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.755192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.755279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.755357 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.859118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.859155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.859164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.859186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.859196 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.962306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.962378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.962401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.962424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:33 crc kubenswrapper[4833]: I0127 14:12:33.962437 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:33Z","lastTransitionTime":"2026-01-27T14:12:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.065264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.065325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.065343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.065377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.065394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.168675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.168748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.168761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.168784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.168801 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.191167 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:56:49.250575751 +0000 UTC Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.209822 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:34 crc kubenswrapper[4833]: E0127 14:12:34.210152 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.296801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.296878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.296896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.296921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.296938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.399597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.399653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.399667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.399689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.399705 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.502865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.503170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.503411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.503511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.503568 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.606696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.606780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.606801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.606827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.606846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.710657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.710731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.710749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.710777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.710803 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.820393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.820500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.820522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.820547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.820565 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.924771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.924835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.924852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.924874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:34 crc kubenswrapper[4833]: I0127 14:12:34.924891 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:34Z","lastTransitionTime":"2026-01-27T14:12:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.028082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.028130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.028146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.028167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.028181 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.130714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.130748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.130765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.130784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.130797 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.191684 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:29:18.105779167 +0000 UTC Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.210585 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.210693 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.210591 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:35 crc kubenswrapper[4833]: E0127 14:12:35.210812 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:35 crc kubenswrapper[4833]: E0127 14:12:35.211085 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:35 crc kubenswrapper[4833]: E0127 14:12:35.211285 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.233664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.233984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.234100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.234188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.234309 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.338098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.338252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.338282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.338307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.338324 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.441117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.441178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.441199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.441225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.441247 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.544993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.545440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.545739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.545900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.546079 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.649336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.649780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.650070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.650277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.650485 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.753260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.753327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.753347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.753375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.753394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.850942 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.855565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.855613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.855637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.855667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.855691 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.866026 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.869527 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.898691 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.913671 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.924483 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.936528 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc 
kubenswrapper[4833]: I0127 14:12:35.952010 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50
852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.958755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.958793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.958803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.958818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.958843 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:35Z","lastTransitionTime":"2026-01-27T14:12:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.969909 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:35 crc kubenswrapper[4833]: I0127 14:12:35.985839 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.001142 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.017301 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.032854 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.057295 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.062150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.062205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.062221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.062244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.062259 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.074600 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.086029 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.097828 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.113905 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.128804 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.165859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.165895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.165903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.165917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.165927 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.192641 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:00:20.600978691 +0000 UTC Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.210255 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:36 crc kubenswrapper[4833]: E0127 14:12:36.210563 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.269164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.269239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.269264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.269299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.269321 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.373176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.373245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.373256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.373270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.373281 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.475974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.476018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.476029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.476046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.476056 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.578658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.578712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.578728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.578751 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.578765 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.682158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.682237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.682255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.682278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.682293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.785284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.785353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.785363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.785382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.785392 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.888697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.888745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.888756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.888775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.888791 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.991708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.991761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.991775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.991795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:36 crc kubenswrapper[4833]: I0127 14:12:36.991810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:36Z","lastTransitionTime":"2026-01-27T14:12:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.095401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.095537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.095556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.095578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.095593 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.193200 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:07:46.407371308 +0000 UTC Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.199117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.199612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.199826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.200124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.200342 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.210878 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.210888 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:37 crc kubenswrapper[4833]: E0127 14:12:37.211121 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.210901 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:37 crc kubenswrapper[4833]: E0127 14:12:37.211325 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:37 crc kubenswrapper[4833]: E0127 14:12:37.211418 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.303230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.303328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.303367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.303396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.303417 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.406275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.406320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.406331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.406346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.406358 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.508765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.508814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.508824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.508842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.508852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.612205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.612249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.612259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.612275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.612287 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.715522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.715581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.715594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.715614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.715626 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.818776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.818821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.818838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.818853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.818862 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.921964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.922027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.922038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.922054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:37 crc kubenswrapper[4833]: I0127 14:12:37.922087 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:37Z","lastTransitionTime":"2026-01-27T14:12:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.024580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.024617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.024797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.025035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.025114 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.128215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.128266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.128284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.128309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.128327 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.193909 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:14:35.565243177 +0000 UTC Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.210588 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:38 crc kubenswrapper[4833]: E0127 14:12:38.210937 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.233205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.233299 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.233318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.233343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.233362 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.336065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.336109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.336121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.336138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.336149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.439055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.439115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.439127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.439147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.439163 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.542711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.542786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.542807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.542834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.542855 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.646592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.646675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.646693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.646722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.646742 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.749550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.749595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.749608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.749626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.749639 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.852811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.852853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.852864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.852878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.852888 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.956096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.956142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.956155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.956174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:38 crc kubenswrapper[4833]: I0127 14:12:38.956189 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:38Z","lastTransitionTime":"2026-01-27T14:12:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.005296 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.028160 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 
14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.048246 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.059243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.059298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.059314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.059338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.059356 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.064018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.074596 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.094437 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.110741 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.124619 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.137681 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.150383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.161679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.161836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.161862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.161881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.161937 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.164967 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc 
kubenswrapper[4833]: I0127 14:12:39.179303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.194205 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:01:36.009218438 +0000 UTC Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.194344 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.209900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.209981 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.209900 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:39 crc kubenswrapper[4833]: E0127 14:12:39.210073 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:39 crc kubenswrapper[4833]: E0127 14:12:39.210165 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:39 crc kubenswrapper[4833]: E0127 14:12:39.210244 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.212154 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os
-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.235678 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.255497 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.264394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.264486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.264495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.264512 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.264522 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.271009 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.284114 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.301194 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.329692 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.344341 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.359353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.366814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc 
kubenswrapper[4833]: I0127 14:12:39.366872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.366889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.366916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.366934 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.372391 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.386797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.400508 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.412373 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.423640 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.435951 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.446297 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.457374 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc 
kubenswrapper[4833]: I0127 14:12:39.469187 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.471831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.471874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.471890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.471910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.471929 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.482838 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.496087 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.508746 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.521556 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.536440 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.555704 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.575534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.575583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.575599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.575621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.575642 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.679816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.679875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.679886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.679907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.679918 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.783327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.783711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.783778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.783846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.783947 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.886809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.886872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.886886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.886911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.886927 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.989545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.989598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.989610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.989631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:39 crc kubenswrapper[4833]: I0127 14:12:39.989645 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:39Z","lastTransitionTime":"2026-01-27T14:12:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.092654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.092705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.092719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.092737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.092750 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.194394 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:12:48.952585818 +0000 UTC Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.196665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.196729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.196745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.196769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.196783 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.210078 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:40 crc kubenswrapper[4833]: E0127 14:12:40.210324 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.299423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.299519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.299537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.299556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.299569 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.403128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.403190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.403201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.403226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.403242 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.506262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.506320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.506334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.506361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.506377 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.613612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.613857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.613877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.613901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.613982 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.717562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.717610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.717619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.717637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.717647 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.820939 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.821008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.821021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.821093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.821112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.927601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.927639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.927649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.927666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:40 crc kubenswrapper[4833]: I0127 14:12:40.927680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:40Z","lastTransitionTime":"2026-01-27T14:12:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.031677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.031748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.031759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.031780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.031794 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.135306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.135370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.135384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.135400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.135741 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.195311 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:16:35.130091392 +0000 UTC Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.210579 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.210579 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:41 crc kubenswrapper[4833]: E0127 14:12:41.210746 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:41 crc kubenswrapper[4833]: E0127 14:12:41.210839 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.211148 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:41 crc kubenswrapper[4833]: E0127 14:12:41.211393 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.238843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.239171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.239676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.239816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.239945 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.343346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.343400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.343413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.343432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.343473 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.445716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.445763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.445776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.445792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.445804 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.549327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.549772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.549856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.549927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.549985 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.652361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.652414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.652425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.652439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.652467 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.756171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.756231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.756242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.756257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.756269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.859962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.860042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.860125 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.860181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.860201 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.963303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.963374 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.963401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.963428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:41 crc kubenswrapper[4833]: I0127 14:12:41.963469 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:41Z","lastTransitionTime":"2026-01-27T14:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.066591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.066637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.066647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.066663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.066673 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.170281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.170343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.170357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.170377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.170390 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.196274 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:10:01.112858992 +0000 UTC Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.210108 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:42 crc kubenswrapper[4833]: E0127 14:12:42.210374 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.272575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.272610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.272621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.272637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.272648 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.375919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.375968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.375979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.376002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.376015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.479411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.479510 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.479536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.479566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.479591 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.583051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.583093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.583107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.583124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.583135 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.685769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.685810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.685822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.685838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.685850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.788070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.788110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.788122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.788140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.788154 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.890545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.890575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.890584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.890597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.890606 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.994265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.994310 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.994321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.994340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:42 crc kubenswrapper[4833]: I0127 14:12:42.994358 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:42Z","lastTransitionTime":"2026-01-27T14:12:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.058929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.058970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.058981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.059000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.059012 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.070873 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.075037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.075083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.075097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.075119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.075133 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.086907 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.090591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.090628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.090642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.090664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.090689 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.116879 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.122396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.122501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.122518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.122539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.122553 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.135725 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.140834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.140898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.140913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.140934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.140946 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.154790 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.154957 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.157335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.157418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.157435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.157488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.157506 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.196909 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:25:40.973917857 +0000 UTC Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.210386 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.210606 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.210924 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.211102 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.211375 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:43 crc kubenswrapper[4833]: E0127 14:12:43.211478 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.260722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.260816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.260828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.260872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.260882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.363557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.363615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.363629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.363647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.363659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.468514 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.468574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.468591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.468608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.468626 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.572290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.572327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.572343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.572367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.572378 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.675880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.675928 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.675939 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.675957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.675969 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.778747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.778782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.778791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.778804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.778815 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.882363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.882425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.882434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.882476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.882491 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.985335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.985392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.985404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.985419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:43 crc kubenswrapper[4833]: I0127 14:12:43.985431 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:43Z","lastTransitionTime":"2026-01-27T14:12:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.088026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.088068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.088080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.088102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.088117 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.191112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.191172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.191186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.191205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.191215 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.197363 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:22:37.913678198 +0000 UTC Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.209741 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:44 crc kubenswrapper[4833]: E0127 14:12:44.209966 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.211133 4833 scope.go:117] "RemoveContainer" containerID="9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.293912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.293947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.293971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.293986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.294006 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.398717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.398762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.398773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.398794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.398805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.501951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.501997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.502008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.502044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.502057 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.604524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.604565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.604578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.604595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.604605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.706862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.706911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.706922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.706937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.706950 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.810386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.810461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.810476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.810500 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.810513 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.913792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.913845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.913856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.913886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:44 crc kubenswrapper[4833]: I0127 14:12:44.913904 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:44Z","lastTransitionTime":"2026-01-27T14:12:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.016965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.017025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.017035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.017054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.017067 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.055275 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/1.log" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.058358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.059133 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.077479 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.094891 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.123768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.123830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.123858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.123880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.123894 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.127863 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.146482 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.171202 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.185427 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.197678 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:23:43.845059096 +0000 UTC Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.199523 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.210111 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.210142 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.210262 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:45 crc kubenswrapper[4833]: E0127 14:12:45.210384 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:45 crc kubenswrapper[4833]: E0127 14:12:45.210497 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:45 crc kubenswrapper[4833]: E0127 14:12:45.210796 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.220876 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.226836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.226884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc 
kubenswrapper[4833]: I0127 14:12:45.226893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.226926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.226935 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.238475 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.253044 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.265428 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.277896 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.298352 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.323623 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod 
opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.330087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.330121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.330129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.330150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.330170 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.348332 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.363507 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.377151 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.395260 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:45Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.433151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.433257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.433274 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.433306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.433323 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.536379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.536436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.536495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.536517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.536531 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.639214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.639267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.639282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.639300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.639311 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.742173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.742256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.742277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.742306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.742325 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.844946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.844995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.845004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.845054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.845066 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.948539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.948586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.948598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.948619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:45 crc kubenswrapper[4833]: I0127 14:12:45.948634 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:45Z","lastTransitionTime":"2026-01-27T14:12:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.051218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.051741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.051893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.052040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.052169 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.064844 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/2.log" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.065764 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/1.log" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.069038 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" exitCode=1 Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.069107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.069173 4833 scope.go:117] "RemoveContainer" containerID="9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.069703 4833 scope.go:117] "RemoveContainer" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" Jan 27 14:12:46 crc kubenswrapper[4833]: E0127 14:12:46.069880 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.085410 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.098722 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.110153 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.134994 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.155466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.155521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.155534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.155553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.155568 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.162500 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.182513 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.197837 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:45:23.878604615 +0000 UTC Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.202581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.210197 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:46 crc kubenswrapper[4833]: E0127 14:12:46.210344 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.215222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc
/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.228080 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.240853 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43fe
b6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.253208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.267768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.267830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.267843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.267863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.267877 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.273092 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.293028 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a3e0f8e5ccb46c445ec99a6799f8c66abd99d9bdfc1dc52e7b77e7503eee8a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:30Z\\\",\\\"message\\\":\\\"rk-operator/iptables-alerter-4ln5h\\\\nI0127 14:12:30.361707 6329 ovn.go:134] Ensuring 
zone local for Pod openshift-multus/multus-additional-cni-plugins-k84ff in node crc\\\\nI0127 14:12:30.361717 6329 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nF0127 14:12:30.361726 6329 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:30Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:12:30.361729 6329 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0127 14:12:30.361739 6329 obj_retry.go:365] Adding new object: *v1.Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service 
openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] 
Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.307932 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.321548 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.342630 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.358988 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.370476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.370513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.370525 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.370539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.370548 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.381542 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f
50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.473462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.473524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.473545 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.473567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.473581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.576651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.576715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.576726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.576742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.576753 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.679340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.679385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.679397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.679413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.679424 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.782853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.782889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.782897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.782912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.782924 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.885983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.886037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.886048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.886063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.886073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.988808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.988866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.988876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.988906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:46 crc kubenswrapper[4833]: I0127 14:12:46.988921 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:46Z","lastTransitionTime":"2026-01-27T14:12:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.075328 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/2.log" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.082298 4833 scope.go:117] "RemoveContainer" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" Jan 27 14:12:47 crc kubenswrapper[4833]: E0127 14:12:47.082495 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.091896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.092318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.092414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.092550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.092616 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.101070 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.117895 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.132798 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.145430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.163307 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.184500 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.195862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.196216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.196340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 
14:12:47.196490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.196583 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.198975 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:28:23.380237293 +0000 UTC Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.201616 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.210779 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:47 crc kubenswrapper[4833]: E0127 14:12:47.210900 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.210895 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:47 crc kubenswrapper[4833]: E0127 14:12:47.210980 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.210967 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:47 crc kubenswrapper[4833]: E0127 14:12:47.211186 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.218054 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.239389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.251914 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc 
kubenswrapper[4833]: I0127 14:12:47.265432 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.279668 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.299169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 
14:12:47.299215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.299227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.299137 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a71
4c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/ne
t.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.299245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.299470 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.323529 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.349504 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.363729 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.378105 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.391063 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.403543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.403590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.403602 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.403621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.403632 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.506851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.506906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.506919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.506940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.506953 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.609721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.609776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.609791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.609812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.609825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.713105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.713144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.713155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.713176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.713188 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.816317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.816389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.816403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.816426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.816458 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.920212 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.920259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.920271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.920289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:47 crc kubenswrapper[4833]: I0127 14:12:47.920301 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:47Z","lastTransitionTime":"2026-01-27T14:12:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.022434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.022528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.022547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.022573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.022589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.125610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.125652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.125662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.125683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.125695 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.186976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:48 crc kubenswrapper[4833]: E0127 14:12:48.187194 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:48 crc kubenswrapper[4833]: E0127 14:12:48.187284 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:20.187264532 +0000 UTC m=+101.838588934 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.199978 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:50:36.482681915 +0000 UTC Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.210417 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:48 crc kubenswrapper[4833]: E0127 14:12:48.210644 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.229340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.229401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.229413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.229438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.229465 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.332122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.332166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.332184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.332204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.332216 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.435076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.435116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.435124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.435139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.435150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.537761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.537823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.537836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.537858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.537875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.640633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.640684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.640695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.640712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.640724 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.743593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.743709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.743719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.743741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.743753 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.846719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.846778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.846792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.846815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.846833 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.949830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.949929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.949941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.949963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:48 crc kubenswrapper[4833]: I0127 14:12:48.949978 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:48Z","lastTransitionTime":"2026-01-27T14:12:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.053019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.053057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.053065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.053078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.053088 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.154771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.154820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.154831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.154849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.154885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.200414 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:02:11.617229274 +0000 UTC Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.209701 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.209714 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:49 crc kubenswrapper[4833]: E0127 14:12:49.209849 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.209881 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:49 crc kubenswrapper[4833]: E0127 14:12:49.209941 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:49 crc kubenswrapper[4833]: E0127 14:12:49.209988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.220383 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.231856 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.244012 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43fe
b6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.261297 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.262134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.262182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.262200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.262225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.262244 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.279094 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.291038 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.302661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.313611 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.331099 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.353162 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.364619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.364645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.364659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.364675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.364685 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.372523 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.385466 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.395913 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.406680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.420710 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.437964 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.450167 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.464101 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.467111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.467140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.467153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.467169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.467180 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.569357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.569393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.569407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.569425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.569439 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.672429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.672509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.672523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.672545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.672559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.775644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.775690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.775704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.775722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.775739 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.878894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.878949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.878960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.878981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.878993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.981634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.981699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.981714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.981730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:49 crc kubenswrapper[4833]: I0127 14:12:49.981742 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:49Z","lastTransitionTime":"2026-01-27T14:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.084327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.084378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.084395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.084412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.084427 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.188424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.188513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.188532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.188585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.188604 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.200963 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:28:20.381798647 +0000 UTC Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.210391 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:50 crc kubenswrapper[4833]: E0127 14:12:50.210595 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.291555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.291603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.291613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.291629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.291640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.394103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.394151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.394163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.394180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.394193 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.497060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.497111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.497122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.497138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.497149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.599546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.599590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.599601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.599617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.599632 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.701963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.702012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.702026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.702043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.702055 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.804176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.804235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.804246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.804264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.804278 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.907908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.907949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.907958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.907975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:50 crc kubenswrapper[4833]: I0127 14:12:50.907984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:50Z","lastTransitionTime":"2026-01-27T14:12:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.010710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.010781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.010796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.010817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.010829 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.113156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.113189 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.113199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.113211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.113222 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.202179 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:28:35.065981013 +0000 UTC Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.211477 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:51 crc kubenswrapper[4833]: E0127 14:12:51.211679 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.211944 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:51 crc kubenswrapper[4833]: E0127 14:12:51.212010 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.212221 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:51 crc kubenswrapper[4833]: E0127 14:12:51.212372 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.215076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.215149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.215165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.215181 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.215193 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.317839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.317874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.317883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.317898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.317907 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.420196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.420255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.420268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.420307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.420322 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.523390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.523471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.523484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.523503 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.523518 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.625989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.626046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.626055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.626069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.626079 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.728397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.728465 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.728479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.728496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.728509 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.830402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.830435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.830456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.830472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.830501 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.933622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.933662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.933673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.933693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:51 crc kubenswrapper[4833]: I0127 14:12:51.933705 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:51Z","lastTransitionTime":"2026-01-27T14:12:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.035729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.035769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.035781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.035797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.035808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.138820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.138860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.138873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.138890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.138902 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.202797 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:27:20.712129918 +0000 UTC Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.210321 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:52 crc kubenswrapper[4833]: E0127 14:12:52.210580 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.241291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.241337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.241353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.241378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.241395 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.344191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.344233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.344243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.344256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.344268 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.447145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.447195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.447205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.447220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.447231 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.549739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.549796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.549806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.549830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.549867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.653703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.653745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.653757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.653775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.653808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.756207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.756250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.756260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.756273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.756285 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.859735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.859782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.859792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.859813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.859824 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.962474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.962531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.962548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.962569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:52 crc kubenswrapper[4833]: I0127 14:12:52.962581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:52Z","lastTransitionTime":"2026-01-27T14:12:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.066472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.066983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.066997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.067023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.067040 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.169896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.169973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.169983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.170000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.170011 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.203376 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:34:03.385070851 +0000 UTC Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.210058 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.210188 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.210058 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.210425 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.210371 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.210191 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.272952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.273037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.273046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.273067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.273083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.375903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.375948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.375959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.375987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.376000 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.478488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.478568 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.478583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.478615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.478633 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.544589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.544651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.544662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.544680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.544692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.561420 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.565849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.565899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.565911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.565935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.565947 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.580435 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.586590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.586627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.586638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.586656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.586669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.602879 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.608468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.608543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.608556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.608579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.608593 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.624254 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.634607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.634665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.634684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.634708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.634725 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.648397 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:53 crc kubenswrapper[4833]: E0127 14:12:53.648576 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.653175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.653211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.653222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.653240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.653252 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.756584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.756633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.756646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.756663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.756675 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.859302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.859345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.859354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.859368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.859393 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.962412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.962461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.962474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.962492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:53 crc kubenswrapper[4833]: I0127 14:12:53.962502 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:53Z","lastTransitionTime":"2026-01-27T14:12:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.065077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.065117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.065166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.065182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.065194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.167362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.167410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.167421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.167437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.167484 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.204056 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:45:56.546281042 +0000 UTC Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.210403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:54 crc kubenswrapper[4833]: E0127 14:12:54.210575 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.271301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.271371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.271385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.271413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.271430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.374921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.374984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.374994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.375013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.375047 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.478777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.478843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.478857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.478879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.478893 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.581552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.581634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.581646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.581660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.581669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.684311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.684363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.684377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.684395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.684408 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.787434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.787515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.787529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.787546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.787559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.891138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.891205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.891225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.891257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.891279 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.995171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.995223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.995235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.995253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:54 crc kubenswrapper[4833]: I0127 14:12:54.995266 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:54Z","lastTransitionTime":"2026-01-27T14:12:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.098747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.098793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.098804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.098821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.098833 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.201700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.201741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.201753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.201771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.201785 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.204211 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:02:32.265771294 +0000 UTC Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.209746 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:55 crc kubenswrapper[4833]: E0127 14:12:55.209913 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.210181 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:55 crc kubenswrapper[4833]: E0127 14:12:55.210284 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.210548 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:55 crc kubenswrapper[4833]: E0127 14:12:55.210679 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.308278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.308344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.308364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.308397 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.308419 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.411481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.411563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.411577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.411598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.411609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.514045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.514075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.514083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.514095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.514104 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.617903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.617967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.617988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.618011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.618032 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.721227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.721270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.721280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.721295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.721305 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.824678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.824748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.824765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.824787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.824803 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.927483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.927524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.927533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.927547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:55 crc kubenswrapper[4833]: I0127 14:12:55.927562 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:55Z","lastTransitionTime":"2026-01-27T14:12:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.030205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.030244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.030253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.030268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.030277 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.133395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.133462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.133473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.133493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.133508 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.205199 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:51:40.271628256 +0000 UTC Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.209634 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:56 crc kubenswrapper[4833]: E0127 14:12:56.209830 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.224885 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.236016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.236083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.236093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.236151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.236161 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.339534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.339590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.339602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.339619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.339629 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.442194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.442255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.442264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.442279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.442288 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.545984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.546044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.546061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.546082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.546101 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.648472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.648511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.648520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.648557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.648568 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.751237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.751291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.751307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.751324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.751336 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.854363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.854413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.854426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.854470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.854484 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.957809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.957873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.957889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.957936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:56 crc kubenswrapper[4833]: I0127 14:12:56.957960 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:56Z","lastTransitionTime":"2026-01-27T14:12:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.060640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.060713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.060732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.060755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.060768 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.163904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.163997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.164016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.164044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.164092 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.205563 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:40:55.155837425 +0000 UTC Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.210100 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.210139 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.210120 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:57 crc kubenswrapper[4833]: E0127 14:12:57.210275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:57 crc kubenswrapper[4833]: E0127 14:12:57.210354 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:57 crc kubenswrapper[4833]: E0127 14:12:57.210483 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.266564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.266614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.266626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.266646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.266660 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.369990 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.370083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.370109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.370158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.370187 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.473410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.473505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.473517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.473537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.473551 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.593861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.593944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.593958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.593981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.593994 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.697474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.697538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.697551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.697574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.697589 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.800806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.801534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.801728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.801932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.802114 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.905222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.905273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.905286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.905302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:57 crc kubenswrapper[4833]: I0127 14:12:57.905314 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:57Z","lastTransitionTime":"2026-01-27T14:12:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.008665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.008700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.008713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.008728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.008740 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.111582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.111622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.111637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.111664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.111688 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.206152 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:36:20.003123782 +0000 UTC Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.210840 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:12:58 crc kubenswrapper[4833]: E0127 14:12:58.211009 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.212379 4833 scope.go:117] "RemoveContainer" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" Jan 27 14:12:58 crc kubenswrapper[4833]: E0127 14:12:58.212854 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.215029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.215066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.215076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.215094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.215104 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.318477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.318555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.318567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.318588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.318603 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.421859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.421912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.421924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.421952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.421975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.525049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.525091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.525102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.525118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.525128 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.628068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.628126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.628141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.628158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.628171 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.731130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.731193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.731213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.731241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.731261 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.834518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.834589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.834606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.834626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.834642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.937621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.937691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.937702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.937724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:58 crc kubenswrapper[4833]: I0127 14:12:58.937737 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:58Z","lastTransitionTime":"2026-01-27T14:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.040498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.040535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.040544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.040558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.040569 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.142522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.142551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.142559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.142572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.142581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.206762 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:39:27.560346976 +0000 UTC Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.210749 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.210894 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:12:59 crc kubenswrapper[4833]: E0127 14:12:59.211625 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.211722 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:12:59 crc kubenswrapper[4833]: E0127 14:12:59.211919 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:12:59 crc kubenswrapper[4833]: E0127 14:12:59.212207 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.229692 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.245749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.245780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.245788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.245801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.245810 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.248568 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.274036 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port 
openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.290064 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.306271 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.326546 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.343677 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.347406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.347479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.347493 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.347515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.347528 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.364755 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f
50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.380020 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.392176 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.406255 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.423197 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.438700 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.451097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.451139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.451150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.451167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.451179 4833 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.454550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.471920 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.487804 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.501243 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.510672 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc 
kubenswrapper[4833]: I0127 14:12:59.522124 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:12:59Z is after 2025-08-24T17:21:41Z" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.553464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.553509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.553519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.553540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.553550 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.655860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.655906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.655918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.655936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.655948 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.758967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.759021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.759034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.759054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.759066 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.862320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.862381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.862413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.862432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.862464 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.970052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.970532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.970915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.976159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:12:59 crc kubenswrapper[4833]: I0127 14:12:59.976203 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:12:59Z","lastTransitionTime":"2026-01-27T14:12:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.079035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.079081 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.079093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.079109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.079122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.181369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.181399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.181407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.181418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.181426 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.207163 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:38:34.507023619 +0000 UTC Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.210583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:00 crc kubenswrapper[4833]: E0127 14:13:00.210763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.284301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.284357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.284368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.284383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.284392 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.386378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.386494 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.386528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.386542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.386623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.489424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.489481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.489492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.489506 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.489520 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.592274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.592314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.592323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.592336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.592346 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.695074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.695130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.695141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.695155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.695165 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.797840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.797886 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.797903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.797922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.797936 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.900955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.901016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.901037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.901063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:00 crc kubenswrapper[4833]: I0127 14:13:00.901081 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:00Z","lastTransitionTime":"2026-01-27T14:13:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.004034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.004124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.004142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.004190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.004205 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.107991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.108053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.108073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.108096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.108114 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.207921 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:38:58.468588568 +0000 UTC Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.209571 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.209725 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:01 crc kubenswrapper[4833]: E0127 14:13:01.209909 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.209735 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:01 crc kubenswrapper[4833]: E0127 14:13:01.210107 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:01 crc kubenswrapper[4833]: E0127 14:13:01.210010 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.211507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.211549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.211564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.211582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.211596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.314526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.314885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.314973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.315075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.315677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.418357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.418391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.418399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.418413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.418421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.521296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.521337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.521368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.521382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.521391 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.624259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.624316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.624331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.624348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.624360 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.727260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.727311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.727323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.727339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.727359 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.830736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.830820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.830841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.830872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.830891 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.933102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.933140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.933151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.933166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:01 crc kubenswrapper[4833]: I0127 14:13:01.933176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:01Z","lastTransitionTime":"2026-01-27T14:13:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.035855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.035913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.035927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.035946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.035957 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.138026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.138057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.138065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.138078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.138087 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.209011 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:10:15.733404379 +0000 UTC Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.210296 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:02 crc kubenswrapper[4833]: E0127 14:13:02.210507 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.240322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.240722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.240824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.240912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.240984 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.343357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.343399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.343409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.343424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.343438 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.445530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.445573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.445582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.445595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.445604 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.548486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.548544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.548554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.548567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.548578 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.652282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.652365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.652380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.652421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.652435 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.755706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.755753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.755764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.755777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.755788 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.858306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.858358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.858370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.858384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.858392 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.960686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.961077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.961166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.961251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:02 crc kubenswrapper[4833]: I0127 14:13:02.961332 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:02Z","lastTransitionTime":"2026-01-27T14:13:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.063969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.064029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.064041 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.064057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.064068 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.168417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.168498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.168513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.168532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.168545 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.210013 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:00:45.640868385 +0000 UTC Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.210153 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.210199 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.210244 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:03 crc kubenswrapper[4833]: E0127 14:13:03.210304 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:03 crc kubenswrapper[4833]: E0127 14:13:03.210378 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:03 crc kubenswrapper[4833]: E0127 14:13:03.210498 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.272038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.272083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.272094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.272115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.272124 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.375261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.375297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.375311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.375329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.375340 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.477962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.478004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.478014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.478028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.478038 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.581640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.581671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.581679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.581690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.581701 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.684632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.685001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.685093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.685189 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.685284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.788297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.788347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.788359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.788377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.788388 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.890522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.890572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.890585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.890600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.890611 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.993762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.993832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.993850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.993874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:03 crc kubenswrapper[4833]: I0127 14:13:03.993895 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:03Z","lastTransitionTime":"2026-01-27T14:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.037206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.037592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.037665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.037736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.037796 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.053075 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.057763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.057906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.057971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.058040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.058101 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.069791 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.074293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.074585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.074677 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.074756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.074841 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.087183 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.091859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.091920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.091938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.091974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.091992 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.111553 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.115916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.116020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.116099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.116168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.116227 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.129687 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.129811 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.131298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.131329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.131339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.131357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.131370 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.210430 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:58:10.441610782 +0000 UTC Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.211661 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:04 crc kubenswrapper[4833]: E0127 14:13:04.211988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.233562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.233933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.234038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.234115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.234189 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.337261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.337693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.337788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.337877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.337964 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.440138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.440206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.440224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.440303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.440326 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.542269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.542601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.545648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.545964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.546068 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.648930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.648972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.648983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.648996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.649005 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.751434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.751794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.751915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.752019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.752139 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.854914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.855228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.855354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.855482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.855563 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.958528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.958580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.958595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.958615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:04 crc kubenswrapper[4833]: I0127 14:13:04.958628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:04Z","lastTransitionTime":"2026-01-27T14:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.060985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.061029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.061040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.061056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.061066 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.072642 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.072813 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:09.072781337 +0000 UTC m=+150.724105749 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.163273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.163339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.163347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.163379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.163389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.174043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.174089 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.174113 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.174135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174222 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174221 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174276 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:14:09.174263335 +0000 UTC m=+150.825587737 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174277 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174304 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174319 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174291 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:14:09.174285216 +0000 UTC m=+150.825609618 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174372 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:14:09.174360088 +0000 UTC m=+150.825684490 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174558 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174611 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174636 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.174738 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:14:09.174706848 +0000 UTC m=+150.826031290 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.210237 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.210291 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.210370 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.210260 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.210577 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:10:16.30202458 +0000 UTC Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.210746 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:05 crc kubenswrapper[4833]: E0127 14:13:05.210850 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.265741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.265796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.265812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.265831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.265846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.368625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.368666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.368674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.368687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.368699 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.471790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.471827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.471838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.471856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.471876 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.574576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.574625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.574638 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.574655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.574667 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.677391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.677466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.677477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.677493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.677504 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.779968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.780041 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.780054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.780076 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.780090 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.882598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.882649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.882660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.882679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.882693 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.984600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.984641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.984650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.984662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:05 crc kubenswrapper[4833]: I0127 14:13:05.984671 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:05Z","lastTransitionTime":"2026-01-27T14:13:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.087431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.087603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.087621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.087639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.087647 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.190193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.190236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.190252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.190270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.190283 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.210723 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:06 crc kubenswrapper[4833]: E0127 14:13:06.210953 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.211045 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:55:20.658382356 +0000 UTC Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.294317 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.294378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.294388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.294411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.294434 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.397524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.397597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.397621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.397649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.397668 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.500805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.500870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.500884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.500902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.501263 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.604407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.604469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.604480 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.604496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.604508 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.709122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.709167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.709176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.709191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.709200 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.813855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.813884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.813892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.813904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.813913 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.917276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.917343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.917361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.917392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:06 crc kubenswrapper[4833]: I0127 14:13:06.917417 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:06Z","lastTransitionTime":"2026-01-27T14:13:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.019912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.019951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.019960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.019973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.019983 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.122046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.122121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.122153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.122204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.122220 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.210514 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.210664 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:07 crc kubenswrapper[4833]: E0127 14:13:07.210869 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.211332 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.211398 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:28:13.065119792 +0000 UTC Jan 27 14:13:07 crc kubenswrapper[4833]: E0127 14:13:07.211554 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:07 crc kubenswrapper[4833]: E0127 14:13:07.211624 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.224519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.224877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.224892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.224907 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.224917 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.327755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.327780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.327788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.327800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.327808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.430580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.430992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.431814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.432223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.432327 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.535021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.535088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.535097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.535111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.535120 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.637524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.637564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.637574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.637587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.637596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.740159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.740202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.740214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.740229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.740241 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.842542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.842587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.842599 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.842678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.842689 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.945511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.945558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.945574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.945591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:07 crc kubenswrapper[4833]: I0127 14:13:07.945604 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:07Z","lastTransitionTime":"2026-01-27T14:13:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.048193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.048258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.048270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.048282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.048292 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.150887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.150958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.150980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.151008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.151027 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.209968 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:08 crc kubenswrapper[4833]: E0127 14:13:08.210110 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.212126 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:33:18.813069512 +0000 UTC Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.254222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.254277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.254290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.254307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.254322 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.357099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.357143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.357156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.357170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.357184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.460757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.460820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.460844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.460864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.460878 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.564609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.564671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.564685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.564704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.564719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.667669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.667721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.667731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.667745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.667755 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.770368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.770488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.770505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.770522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.770534 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.873875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.873935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.873947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.873965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.873979 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.976635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.976683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.976693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.976710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:08 crc kubenswrapper[4833]: I0127 14:13:08.976719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:08Z","lastTransitionTime":"2026-01-27T14:13:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.079787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.079848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.079860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.079895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.079910 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.182381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.182425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.182435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.182483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.182495 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.209648 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:09 crc kubenswrapper[4833]: E0127 14:13:09.209806 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.209836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.209805 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:09 crc kubenswrapper[4833]: E0127 14:13:09.210035 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:09 crc kubenswrapper[4833]: E0127 14:13:09.210206 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.212263 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:23:16.282580176 +0000 UTC Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.226301 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191b
a7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.241315 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.260948 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.283841 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.285085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.285123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.285133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.285147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.285157 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.310389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.326064 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.339311 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.350841 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.364251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.378326 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.387778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.388140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.388292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.388394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.388497 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.395347 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.409389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.421285 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.434032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc 
kubenswrapper[4833]: I0127 14:13:09.444735 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.455890 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.469896 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.484315 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.491286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.491328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.491337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.491367 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.491377 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.502166 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-27T14:13:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.594379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.594419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.594435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.594483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.594499 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.696513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.696892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.696978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.697061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.697150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.799910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.800251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.800400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.800549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.800649 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.903262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.903654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.903801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.903917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:09 crc kubenswrapper[4833]: I0127 14:13:09.904017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:09Z","lastTransitionTime":"2026-01-27T14:13:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.006615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.006952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.007031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.007112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.007181 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.110283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.110764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.110910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.111025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.111134 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.209656 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:10 crc kubenswrapper[4833]: E0127 14:13:10.210051 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.210336 4833 scope.go:117] "RemoveContainer" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.212371 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:04:47.022705728 +0000 UTC Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.214204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.214247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.214259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.214277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.214290 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.317966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.318528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.318541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.318559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.318572 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.421269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.421298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.421309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.421325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.421336 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.525101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.525173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.525190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.525211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.525230 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.627752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.627808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.627819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.627836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.627847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.733687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.733719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.733728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.733742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.733751 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.836742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.836778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.836789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.836805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.836817 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.939119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.939155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.939165 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.939180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:10 crc kubenswrapper[4833]: I0127 14:13:10.939192 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:10Z","lastTransitionTime":"2026-01-27T14:13:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.042179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.042226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.042237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.042254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.042265 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.145271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.145312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.145321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.145336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.145347 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.165108 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/2.log" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.167509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.168085 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.169196 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/0.log" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.169246 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7a7c135-ca95-4e75-b823-d1e45101a761" containerID="378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e" exitCode=1 Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.169277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerDied","Data":"378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.169716 4833 scope.go:117] "RemoveContainer" containerID="378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.187644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.204021 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d
2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.209825 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.209882 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.209827 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:11 crc kubenswrapper[4833]: E0127 14:13:11.209928 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:11 crc kubenswrapper[4833]: E0127 14:13:11.209991 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:11 crc kubenswrapper[4833]: E0127 14:13:11.210056 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.212790 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:44:24.177035205 +0000 UTC Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.226586 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:1
1:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.237550 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.259547 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.262192 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.262226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.262239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.262257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.262271 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.275530 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.303243 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.319512 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.330813 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.342590 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.352430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.361380 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.365197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.365232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.365246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.365262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.365274 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.371205 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc 
kubenswrapper[4833]: I0127 14:13:11.382875 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.391217 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.402584 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.421403 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] 
Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.434061 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.445932 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.459093 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.467496 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.467536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.467547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.467569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.467582 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.474627 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] 
Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.486251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.498335 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.510733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.523070 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.548695 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.559503 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.568557 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.569564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.569609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.569632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.569652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.569675 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.576741 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.596492 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.610295 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.624133 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.636536 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.653191 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.665261 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.671748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.671772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.671781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.671794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.671803 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.678966 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc 
kubenswrapper[4833]: I0127 14:13:11.693127 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.706063 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:11Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.774127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.774188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.774200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.774220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.774233 4833 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.877277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.877441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.877498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.877534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.877561 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.980775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.980862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.980888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.980919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:11 crc kubenswrapper[4833]: I0127 14:13:11.980945 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:11Z","lastTransitionTime":"2026-01-27T14:13:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.084382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.084504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.084525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.084551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.084573 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.174716 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.175405 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/2.log" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.178242 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" exitCode=1 Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.178308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.178344 4833 scope.go:117] "RemoveContainer" containerID="464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.179216 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:13:12 crc kubenswrapper[4833]: E0127 14:13:12.179405 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.180058 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/0.log" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.180124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerStarted","Data":"67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.186290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.186327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.186335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.186347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.186356 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.193086 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.206097 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.209543 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:12 crc kubenswrapper[4833]: E0127 14:13:12.209674 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.213727 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:26:47.800342934 +0000 UTC Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.219101 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.234127 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.250599 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.262333 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.278773 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.289241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.289287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc 
kubenswrapper[4833]: I0127 14:13:12.289298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.289313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.289326 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.292008 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.304201 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.315543 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.328721 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc 
kubenswrapper[4833]: I0127 14:13:12.343556 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.356099 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.373489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393000 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace 
openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\
\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.393455 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.412046 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.425927 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.439992 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.456894 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.479217 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.493575 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.496812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.496845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.496857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc 
kubenswrapper[4833]: I0127 14:13:12.496872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.496885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.506647 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.518207 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.532460 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 
14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.546703 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.558128 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.566462 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.578271 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.592099 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.598653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.598675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.598684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.598700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.598710 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.606662 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc 
kubenswrapper[4833]: I0127 14:13:12.621258 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.632412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.646856 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.664378 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.677291 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.688335 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.701486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.701532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.701542 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.701555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.701566 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.705332 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67
047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.725075 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464710f01673cac2438becdf5a50445e534beef639199f5b5ca4812509a2de25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:12:45Z\\\",\\\"message\\\":\\\"nshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI0127 14:12:45.451502 6575 services_controller.go:443] Built service openshift-marketplace/redhat-operators LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.138\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 14:12:45.451540 6575 services_controller.go:444] Built service openshift-marketplace/redhat-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 14:12:45.450866 6575 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451582 6575 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-npb46\\\\nI0127 14:12:45.451531 6575 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Ro\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace 
openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\
\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:12Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.804421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.804836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.804851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.804866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.804877 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.907828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.907878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.907891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.907905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:12 crc kubenswrapper[4833]: I0127 14:13:12.907916 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:12Z","lastTransitionTime":"2026-01-27T14:13:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.011529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.011581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.011592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.011608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.011618 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.114246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.114289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.114302 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.114316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.114326 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.187109 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.192129 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:13:13 crc kubenswrapper[4833]: E0127 14:13:13.192393 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.209941 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:13 crc kubenswrapper[4833]: E0127 14:13:13.210093 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.210245 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:13 crc kubenswrapper[4833]: E0127 14:13:13.210297 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.210420 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:13 crc kubenswrapper[4833]: E0127 14:13:13.210504 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.214241 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:30:30.214609155 +0000 UTC Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.216216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.216249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.216260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.216275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.216286 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.219236 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.231212 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.244162 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.254489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.266403 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.278822 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.288847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.297422 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-no
de-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.308693 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.318155 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.319772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.319826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.319839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.319858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.319893 4833 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.329185 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.344826 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.356617 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.368778 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.382102 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc 
kubenswrapper[4833]: I0127 14:13:13.396630 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.414524 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.421870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 
14:13:13.421948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.421966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.421983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.421999 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.435235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.454982 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 
14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:13Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.524936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.525002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.525018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.525045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.525062 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.628104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.628222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.628236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.628259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.628271 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.732114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.732170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.732182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.732204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.732219 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.835235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.835314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.835328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.835347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.835362 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.938571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.938623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.938660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.938678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:13 crc kubenswrapper[4833]: I0127 14:13:13.938691 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:13Z","lastTransitionTime":"2026-01-27T14:13:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.041033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.041103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.041115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.041139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.041152 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.149602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.149768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.149799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.149827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.149850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.209900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.210027 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.214973 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:03:59.037664933 +0000 UTC Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.252377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.252418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.252428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.252467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.252477 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.328926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.328966 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.328976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.328992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.329004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.340801 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.345583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.345641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.345651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.345665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.345675 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.359561 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.363821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.363887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.363910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.364004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.364023 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.380267 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.384602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.384634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.384643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.384656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.384667 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.395323 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.398491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.398516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.398523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.398536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.398545 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.411866 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:14 crc kubenswrapper[4833]: E0127 14:13:14.411980 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.413737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.413774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.413784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.413802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.413813 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.516889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.516964 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.516988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.517019 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.517042 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.619670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.619704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.619714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.619729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.619742 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.722618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.722675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.722688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.722705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.722718 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.825784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.825848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.825859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.825878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.825891 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.929573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.929652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.929672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.929696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:14 crc kubenswrapper[4833]: I0127 14:13:14.929716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:14Z","lastTransitionTime":"2026-01-27T14:13:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.033008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.033078 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.033095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.033124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.033149 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.136417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.136491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.136509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.136529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.136541 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.209817 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.209852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.209857 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:15 crc kubenswrapper[4833]: E0127 14:13:15.210097 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:15 crc kubenswrapper[4833]: E0127 14:13:15.209990 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:15 crc kubenswrapper[4833]: E0127 14:13:15.210215 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.215099 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:14:40.947906081 +0000 UTC Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.240375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.240485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.240513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.240545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.240680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.344272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.344322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.344334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.344351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.344367 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.447474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.447537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.447558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.447674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.447695 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.550968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.551028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.551045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.551068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.551086 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.654938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.655128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.655171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.655247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.655269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.758722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.758789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.758806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.758830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.758846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.861522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.861572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.861588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.861610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.861625 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.966208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.966278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.966300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.966329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:15 crc kubenswrapper[4833]: I0127 14:13:15.966347 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:15Z","lastTransitionTime":"2026-01-27T14:13:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.069276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.069340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.069355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.069377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.069393 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.175046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.175106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.175122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.175143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.175160 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.210385 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:16 crc kubenswrapper[4833]: E0127 14:13:16.210628 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.215219 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:22:02.082093742 +0000 UTC Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.278215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.278262 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.278276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.278296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.278311 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.382009 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.382390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.382498 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.382597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.382844 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.486756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.487091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.487167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.487235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.487305 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.590693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.591070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.591142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.591218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.591281 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.694481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.694545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.694571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.694617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.694640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.797646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.797706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.797723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.797750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.797767 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.901395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.901472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.901492 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.901515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:16 crc kubenswrapper[4833]: I0127 14:13:16.901531 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:16Z","lastTransitionTime":"2026-01-27T14:13:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.006077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.006113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.006123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.006135 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.006144 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.108743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.108821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.108841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.108862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.108875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.209531 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.209532 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.209609 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:17 crc kubenswrapper[4833]: E0127 14:13:17.209716 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:17 crc kubenswrapper[4833]: E0127 14:13:17.209957 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:17 crc kubenswrapper[4833]: E0127 14:13:17.210068 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.211371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.211411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.211423 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.211455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.211468 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.216226 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:50:40.908965062 +0000 UTC Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.313642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.313712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.313725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.313743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.313757 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.417064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.417204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.417225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.417250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.417269 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.519816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.519862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.519875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.519894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.519908 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.623193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.623255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.623268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.623290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.623305 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.726759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.726802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.726811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.726825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.726835 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.829631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.829686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.829696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.829708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.829719 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.932092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.932168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.932183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.932205 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:17 crc kubenswrapper[4833]: I0127 14:13:17.932222 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:17Z","lastTransitionTime":"2026-01-27T14:13:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.036167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.036258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.036280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.036312 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.036337 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.139655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.139740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.139765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.139798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.139822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.209791 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:18 crc kubenswrapper[4833]: E0127 14:13:18.210031 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.216966 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:01:13.749272973 +0000 UTC Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.242375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.242431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.242490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.242516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.242535 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.344975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.345023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.345035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.345053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.345066 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.448104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.448136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.448147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.448163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.448172 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.551012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.551051 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.551064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.551082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.551098 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.653366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.653399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.653411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.653427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.653436 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.756963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.757016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.757035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.757057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.757076 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.860088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.863394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.863476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.863509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.863523 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.967436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.967918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.968038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.968164 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:18 crc kubenswrapper[4833]: I0127 14:13:18.968295 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:18Z","lastTransitionTime":"2026-01-27T14:13:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.071522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.071585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.071605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.071630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.071647 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.175083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.175513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.175622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.175707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.175850 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.209841 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:19 crc kubenswrapper[4833]: E0127 14:13:19.209972 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.210047 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:19 crc kubenswrapper[4833]: E0127 14:13:19.210114 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.210281 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:19 crc kubenswrapper[4833]: E0127 14:13:19.210343 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.217807 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:13:37.277137 +0000 UTC Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.230331 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.246367 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.263585 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.275207 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.277787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.277816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.277827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.277841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.277860 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.287554 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc 
kubenswrapper[4833]: I0127 14:13:19.301377 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.313216 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.327711 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.350651 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 
14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.363255 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.374464 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.383167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.383204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.383214 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.383231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.383246 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.390083 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67c8b1628162d73508fe972750a3b489928092300cf
1eba37cb39ff62ea50b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.401122 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.424498 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331
d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.437994 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.449874 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.465475 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.482805 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.487598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.487645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.487659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.487680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.487695 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.501938 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:19Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.590320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.590394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.590414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.590478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.590500 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.693789 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.693876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.693902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.693933 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.693958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.797867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.797953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.797976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.798008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.798027 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.901235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.901311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.901329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.901355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:19 crc kubenswrapper[4833]: I0127 14:13:19.901374 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:19Z","lastTransitionTime":"2026-01-27T14:13:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.004126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.004184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.004198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.004221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.004238 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.107006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.107037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.107046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.107073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.107083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209552 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.209868 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: E0127 14:13:20.210172 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.218318 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:01:55.635200284 +0000 UTC Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.236792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:20 crc kubenswrapper[4833]: E0127 14:13:20.236913 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:13:20 crc kubenswrapper[4833]: E0127 14:13:20.236997 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs podName:71508df5-3756-4f7d-ba4a-5dc54fa67ba6 nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.236981756 +0000 UTC m=+165.888306158 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs") pod "network-metrics-daemon-jxvwd" (UID: "71508df5-3756-4f7d-ba4a-5dc54fa67ba6") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.312834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.312882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.312892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.312906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.312920 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.415794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.415919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.415943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.415967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.415985 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.519595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.519656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.519671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.519691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.519703 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.623034 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.623142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.623161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.623191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.623210 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.726637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.726726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.726756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.726791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.726821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.829945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.830002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.830016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.830039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.830055 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.933472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.933523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.933536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.933585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:20 crc kubenswrapper[4833]: I0127 14:13:20.933601 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:20Z","lastTransitionTime":"2026-01-27T14:13:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.036655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.036721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.036733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.036748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.036761 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.139744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.139806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.139823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.139845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.139861 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.209599 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.209699 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.209771 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:21 crc kubenswrapper[4833]: E0127 14:13:21.209825 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:21 crc kubenswrapper[4833]: E0127 14:13:21.209902 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:21 crc kubenswrapper[4833]: E0127 14:13:21.209995 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.218975 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:30:40.779167451 +0000 UTC Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.242071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.242122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.242139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.242176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.242193 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.344403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.344509 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.344524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.344539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.344549 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.448044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.448098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.448119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.448145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.448165 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.551153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.551679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.551802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.551903 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.551996 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.654516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.654596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.654614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.654636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.654650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.758022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.758094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.758113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.758138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.758162 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.861381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.861472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.861493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.861516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.861534 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.965694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.965769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.965816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.965850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:21 crc kubenswrapper[4833]: I0127 14:13:21.965872 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:21Z","lastTransitionTime":"2026-01-27T14:13:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.069770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.070278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.070567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.070791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.070973 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.173665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.174126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.174268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.174415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.174587 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.210271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:22 crc kubenswrapper[4833]: E0127 14:13:22.210763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.219940 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:30:12.0819047 +0000 UTC Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.278004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.278099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.278124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.278155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.278179 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.381256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.381338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.381362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.381393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.381415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.484268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.484330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.484348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.484372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.484389 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.588683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.588762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.588788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.588818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.588843 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.696293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.696351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.696364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.696381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.696400 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.800791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.800862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.800889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.800915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.800932 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.906155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.906233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.906256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.906286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:22 crc kubenswrapper[4833]: I0127 14:13:22.906309 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:22Z","lastTransitionTime":"2026-01-27T14:13:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.009269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.009747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.009917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.010122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.010346 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.113848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.114335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.114484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.114630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.114757 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.210071 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.210127 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:23 crc kubenswrapper[4833]: E0127 14:13:23.210218 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.210082 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:23 crc kubenswrapper[4833]: E0127 14:13:23.210617 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:23 crc kubenswrapper[4833]: E0127 14:13:23.210706 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.216322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.216562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.216682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.216762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.216846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.220092 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:54:02.404288701 +0000 UTC Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.320314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.320373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.320389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.320407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.320421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.423580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.423620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.423629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.423645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.423654 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.526362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.526398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.526411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.526427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.526437 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.629478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.629835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.629929 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.630027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.630125 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.733689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.734198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.734362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.734551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.734781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.838671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.838743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.838763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.838791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.838810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.942235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.942309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.942328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.942352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:23 crc kubenswrapper[4833]: I0127 14:13:23.942369 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:23Z","lastTransitionTime":"2026-01-27T14:13:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.045697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.045761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.045780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.045802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.045821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.148673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.148710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.148720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.148734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.148743 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.209635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.210361 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.220753 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:01:33.111199275 +0000 UTC Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.252306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.252395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.252511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.252563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.252599 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.356769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.357187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.357261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.357343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.357409 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.460666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.460712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.460726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.460748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.460761 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.564296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.564710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.564926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.565047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.565138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.667376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.667420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.667432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.667477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.667493 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.746787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.746857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.746874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.746899 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.746923 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.762857 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.767895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.767941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.767952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.767968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.767981 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.780626 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.784753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.784849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.784868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.784890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.784938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.800041 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.805061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.805104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.805114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.805132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.805144 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.826701 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.831585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.831627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.831642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.831664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.831678 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.843159 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:24 crc kubenswrapper[4833]: E0127 14:13:24.843294 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.844875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.844912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.844923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.844938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.844949 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.948206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.948259 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.948268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.948284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:24 crc kubenswrapper[4833]: I0127 14:13:24.948294 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:24Z","lastTransitionTime":"2026-01-27T14:13:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.051371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.051455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.051470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.051485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.051498 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.154543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.154602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.154614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.154630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.154642 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.210943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.211050 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.210965 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:25 crc kubenswrapper[4833]: E0127 14:13:25.211339 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:25 crc kubenswrapper[4833]: E0127 14:13:25.211421 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:25 crc kubenswrapper[4833]: E0127 14:13:25.211651 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.221417 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:51:02.35836347 +0000 UTC Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.257969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.258018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.258044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.258067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.258084 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.361064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.361120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.361136 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.361160 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.361176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.464978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.465055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.465084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.465119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.465142 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.568840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.569272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.569293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.569313 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.569325 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.672120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.672190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.672215 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.672240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.672260 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.774776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.774857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.774867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.774879 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.774889 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.877984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.878033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.878044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.878061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.878073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.981590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.981663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.981679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.981701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:25 crc kubenswrapper[4833]: I0127 14:13:25.981716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:25Z","lastTransitionTime":"2026-01-27T14:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.085288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.085402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.085425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.085508 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.085530 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.188956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.189046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.189064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.189084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.189099 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.210114 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:26 crc kubenswrapper[4833]: E0127 14:13:26.210985 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.211267 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:13:26 crc kubenswrapper[4833]: E0127 14:13:26.211988 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.222213 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:14:53.435228469 +0000 UTC Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.292210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.292308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.292332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.292351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.292365 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.395730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.395795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.395818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.395843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.395864 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.498652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.498724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.498744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.498814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.498836 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.602553 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.602634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.602660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.602692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.602781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.706427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.706587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.706617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.706649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.706674 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.811104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.811141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.811151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.811166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.811175 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.913956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.914004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.914021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.914042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:26 crc kubenswrapper[4833]: I0127 14:13:26.914056 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:26Z","lastTransitionTime":"2026-01-27T14:13:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.017194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.017274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.017297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.017326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.017348 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.119305 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.119360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.119377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.119504 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.119530 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.210826 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.210964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:27 crc kubenswrapper[4833]: E0127 14:13:27.211097 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:27 crc kubenswrapper[4833]: E0127 14:13:27.211207 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.210833 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:27 crc kubenswrapper[4833]: E0127 14:13:27.211365 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222296 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:45:07.119614409 +0000 UTC Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.222462 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.325897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.326043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.326069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.326098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.326123 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.430589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.431020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.431220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.431361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.431515 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.534275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.534326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.534342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.534359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.534370 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.637476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.637839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.637904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.637967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.638033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.740495 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.740841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.740976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.741065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.741158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.844124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.844183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.844200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.844224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.844241 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.946783 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.946851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.946860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.946872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:27 crc kubenswrapper[4833]: I0127 14:13:27.946883 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:27Z","lastTransitionTime":"2026-01-27T14:13:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.049231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.049263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.049273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.049285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.049294 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.151533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.151573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.151584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.151601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.151611 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.210694 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:28 crc kubenswrapper[4833]: E0127 14:13:28.210834 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.223200 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:39:12.965252586 +0000 UTC Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.253723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.254113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.254322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.254530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.254710 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.357700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.357779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.357798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.357830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.357853 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.460057 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.460464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.460603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.460706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.460799 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.563275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.563321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.563331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.563345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.563355 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.666676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.666762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.666776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.666795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.666808 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.769425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.769493 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.769507 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.769525 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.769535 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.872378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.872422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.872434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.872483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.872494 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.975720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.976112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.976254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.976382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:28 crc kubenswrapper[4833]: I0127 14:13:28.976501 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:28Z","lastTransitionTime":"2026-01-27T14:13:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.078936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.078975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.078986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.079003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.079014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.182679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.182738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.182757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.182780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.182798 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.210214 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:29 crc kubenswrapper[4833]: E0127 14:13:29.210361 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.210721 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.210764 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:29 crc kubenswrapper[4833]: E0127 14:13:29.210863 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:29 crc kubenswrapper[4833]: E0127 14:13:29.211035 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.223392 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:32:58.850361614 +0000 UTC Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.230877 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name
\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.249002 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:355123
35ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.271752 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.292120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.292210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.292227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.292245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.292257 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.296311 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.311908 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.328746 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.344689 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.358066 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.371599 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc 
kubenswrapper[4833]: I0127 14:13:29.387974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.394563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.394633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.394648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.394667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.394678 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.402232 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var
/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.421658 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.444774 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 
14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.468816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.483948 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498810 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.498694 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67c8b1628162d73508fe972750a3b489928092300cf
1eba37cb39ff62ea50b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.513666 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.534196 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331
d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.550607 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:29Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.601194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.601235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.601244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.601283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.601297 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.705062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.705151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.705183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.705213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.705235 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.808562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.808614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.808624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.808644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.808654 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.911532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.911617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.911637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.911665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:29 crc kubenswrapper[4833]: I0127 14:13:29.911683 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:29Z","lastTransitionTime":"2026-01-27T14:13:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.014420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.014489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.014532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.014552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.014563 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.118130 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.118195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.118213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.118236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.118310 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.210234 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:30 crc kubenswrapper[4833]: E0127 14:13:30.210507 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.222172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.222213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.222224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.222240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.222249 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.223491 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:19:58.638789892 +0000 UTC Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.325887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.325973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.325996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.326023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.326051 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.429194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.429236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.429247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.429265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.429276 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.532393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.532499 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.532516 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.532542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.532561 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.634963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.635114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.635129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.635146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.635158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.737996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.738092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.738109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.738129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.738143 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.841102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.841171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.841190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.841219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.841238 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.944234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.944306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.944333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.944361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:30 crc kubenswrapper[4833]: I0127 14:13:30.944383 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:30Z","lastTransitionTime":"2026-01-27T14:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.047584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.047650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.047660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.047675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.047685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.150944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.151017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.151038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.151067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.151092 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.209819 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.209865 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.209955 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:31 crc kubenswrapper[4833]: E0127 14:13:31.210090 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:31 crc kubenswrapper[4833]: E0127 14:13:31.210330 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:31 crc kubenswrapper[4833]: E0127 14:13:31.210460 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.223904 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:06:21.547737855 +0000 UTC Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.252718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.252779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.252795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.252829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.252847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.355284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.355335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.355348 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.355365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.355379 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.458814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.458878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.458892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.458912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.458927 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.561233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.561287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.561295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.561309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.561319 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.664546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.664613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.664629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.664652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.664670 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.767649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.767717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.767739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.767764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.767782 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.870733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.870794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.870811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.870835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.870852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.973471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.973591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.973607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.973653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:31 crc kubenswrapper[4833]: I0127 14:13:31.973670 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:31Z","lastTransitionTime":"2026-01-27T14:13:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.078085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.078763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.078793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.078824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.078846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.182352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.182482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.182515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.182542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.182564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.209686 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:32 crc kubenswrapper[4833]: E0127 14:13:32.209939 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.224920 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:47:29.782486002 +0000 UTC Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.285544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.285624 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.285647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.285676 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.285700 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.389198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.389270 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.389295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.389324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.389346 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.492690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.492766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.492784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.492810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.492831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.595519 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.595601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.595628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.595657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.595677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.698706 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.698752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.698760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.698774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.698785 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.801217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.801257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.801267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.801282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.801293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.904069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.904112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.904124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.904139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:32 crc kubenswrapper[4833]: I0127 14:13:32.904150 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:32Z","lastTransitionTime":"2026-01-27T14:13:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.006840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.006888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.006901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.006917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.006928 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.110562 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.110619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.110630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.110646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.110657 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.210215 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.210287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.210232 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:33 crc kubenswrapper[4833]: E0127 14:13:33.210385 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:33 crc kubenswrapper[4833]: E0127 14:13:33.210506 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:33 crc kubenswrapper[4833]: E0127 14:13:33.210574 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.212561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.212602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.212620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.212640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.212658 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.225705 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:12:54.708340901 +0000 UTC Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.314734 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.314772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.314780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.314795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.314805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.417194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.417244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.417257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.417273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.417283 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.520463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.520687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.520700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.520718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.520734 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.623573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.623601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.623609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.623623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.623632 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.726355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.726437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.726505 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.726533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.726556 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.829640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.829697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.829712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.829727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.829737 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.933337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.933392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.933408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.933434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:33 crc kubenswrapper[4833]: I0127 14:13:33.933487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:33Z","lastTransitionTime":"2026-01-27T14:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.036138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.036175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.036186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.036202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.036214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.138592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.138673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.138687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.138703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.138715 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.209561 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd"
Jan 27 14:13:34 crc kubenswrapper[4833]: E0127 14:13:34.209729 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.226191 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:20:50.49728422 +0000 UTC
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.240923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.240956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.240975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.240991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.241002 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.342972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.343736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.343776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.343802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.343814 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.452470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.452515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.452528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.452545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.452558 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.555821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.555892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.555910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.555935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.555954 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.661544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.661615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.661627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.661645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.661657 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.764796 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.764853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.764865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.764883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.764893 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.867866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.867910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.867925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.867948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.867965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.925202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.925269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.925292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.925316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.925334 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:34 crc kubenswrapper[4833]: E0127 14:13:34.945367 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:34Z is after 2025-08-24T17:21:41Z"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.950774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.950835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.950855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.950880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.950896 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:34 crc kubenswrapper[4833]: E0127 14:13:34.970833 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.979257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.979335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.979354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.979377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.979395 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:34 crc kubenswrapper[4833]: E0127 14:13:34.993169 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.997522 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.997566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.997578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.997594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:34 crc kubenswrapper[4833]: I0127 14:13:34.997605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:34Z","lastTransitionTime":"2026-01-27T14:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.011369 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.015807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.015831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.015838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.015851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.015860 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.030072 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0bb61b53-6253-4e68-9a38-d0d5935c7c24\\\",\\\"systemUUID\\\":\\\"6c7669d1-0a53-46b1-a135-adc3df727a2e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.030220 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.031744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.031786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.031795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.031809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.031819 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.134946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.135010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.135021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.135042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.135059 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.210692 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.210815 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.210848 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.210950 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.211201 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:35 crc kubenswrapper[4833]: E0127 14:13:35.211245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.227045 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 16:43:17.923421704 +0000 UTC Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.236723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.236756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.236765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.236778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.236788 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.339301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.339354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.339365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.339381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.339391 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.442656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.442715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.442727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.442744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.442758 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.545702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.545779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.545801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.545831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.545851 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.648805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.648853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.648865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.648883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.648895 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.751403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.751518 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.751537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.751561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.751578 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.854532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.854597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.854607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.854619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.854628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.958753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.958813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.958839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.958864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:35 crc kubenswrapper[4833]: I0127 14:13:35.958881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:35Z","lastTransitionTime":"2026-01-27T14:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.061489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.061560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.061579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.061604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.061623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.164368 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.164414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.164430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.164471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.164487 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.210066 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:36 crc kubenswrapper[4833]: E0127 14:13:36.210297 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.227635 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:04:46.505950733 +0000 UTC Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.266902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.266972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.266995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.267023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.267047 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.370520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.370573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.370591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.370616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.370639 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.474336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.474395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.474414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.474432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.474483 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.577523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.577598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.577609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.577629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.577640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.681661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.681741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.681766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.681791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.681810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.786198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.786251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.786263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.786283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.786297 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.889780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.889827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.889838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.889856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.889867 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.992735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.992791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.992803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.992828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:36 crc kubenswrapper[4833]: I0127 14:13:36.992842 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:36Z","lastTransitionTime":"2026-01-27T14:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.095256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.095309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.095326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.095344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.095356 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.198578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.198623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.198631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.198645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.198654 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.209929 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.210594 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.210149 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:37 crc kubenswrapper[4833]: E0127 14:13:37.211006 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:37 crc kubenswrapper[4833]: E0127 14:13:37.211255 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:37 crc kubenswrapper[4833]: E0127 14:13:37.211321 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.228737 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:22:32.581614895 +0000 UTC Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.301750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.301812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.301829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.301848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.301859 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.404831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.404913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.404930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.404956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.404975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.507793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.507838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.507849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.507864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.507879 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.610801 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.610841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.610850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.610862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.610870 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.713695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.713750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.713761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.713790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.713798 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.816781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.816824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.816838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.816854 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.816866 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.920700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.920772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.920786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.920806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:37 crc kubenswrapper[4833]: I0127 14:13:37.920821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:37Z","lastTransitionTime":"2026-01-27T14:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.023387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.023438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.023464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.023476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.023504 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.125914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.125952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.125962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.125975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.125987 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.210246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:38 crc kubenswrapper[4833]: E0127 14:13:38.210415 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.228980 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 15:27:38.012709806 +0000 UTC Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.229202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.229251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.229261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.229276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.229285 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.332143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.332198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.332210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.332226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.332235 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.435272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.435334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.435346 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.435369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.435386 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.538767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.538807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.538816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.538829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.538839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.641520 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.641582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.641598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.641621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.641663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.744405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.744486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.744501 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.744521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.744539 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.847379 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.847422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.847461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.847482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.847492 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.950411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.950532 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.950557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.950587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:38 crc kubenswrapper[4833]: I0127 14:13:38.950609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:38Z","lastTransitionTime":"2026-01-27T14:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.052968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.053018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.053029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.053046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.053059 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:39Z","lastTransitionTime":"2026-01-27T14:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:39 crc kubenswrapper[4833]: E0127 14:13:39.153498 4833 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.210465 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.210577 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:39 crc kubenswrapper[4833]: E0127 14:13:39.210602 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.210707 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:39 crc kubenswrapper[4833]: E0127 14:13:39.210946 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:39 crc kubenswrapper[4833]: E0127 14:13:39.211275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.224389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bf5aca6ebdb2b41868357e580555331e2e1119831949911a7c50d4f6fb6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.229562 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:44:09.357009686 +0000 UTC Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.235771 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjvwp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa8c488b-eed2-4666-a5c3-6aa129655eee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://657807fdefa472b533a48a0945790af82f43eea77f3a73a489112b573acedb86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f5
5b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvv8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjvwp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.248434 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7ae050f-04d0-4da1-b503-82308f3481aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:12:01Z\\\"
,\\\"message\\\":\\\"13:11:55 +0000 UTC (now=2026-01-27 14:12:00.989201851 +0000 UTC))\\\\\\\"\\\\nI0127 14:12:00.989253 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0127 14:12:00.989276 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0127 14:12:00.989297 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989313 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0127 14:12:00.989350 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3823791944/tls.crt::/tmp/serving-cert-3823791944/tls.key\\\\\\\"\\\\nI0127 14:12:00.989475 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0127 14:12:00.991307 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991319 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0127 14:12:00.991333 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0127 14:12:00.991338 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0127 14:12:00.991404 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0127 14:12:00.991414 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0127 14:12:00.995694 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0127 
14:12:00.995773 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0127 14:12:01.000979 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.261041 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51ac75ac5575e68f34e0f75ad58ed1490198eabbc6cb079043ab684a3945454e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.273499 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.284661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da82868710e2d735fc986b4d1f07cafbcd778d40f4c33f353133e3fb0a31a2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19c645389e5938cb04c9a0a3ed9f2db46545902bb208047cd6f49479a003c8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.297782 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd82cea5-8cab-4c03-b640-2b4d45ba7e53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c61548ee1a22f170610f752f93e622ead23ee52d5144db835f6d5fa70891ee8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25
e9c12ee6b898252e7d0fc32b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2d4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mcx7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.310783 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jftn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3250d272-0963-40d7-8e9b-7b0129ee4620\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c1a98b6f0421c0b06d21cb164e51be819cba8573b9fc4f8e54162dff889993e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7xplp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jftn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.325183 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgxvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxvwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc 
kubenswrapper[4833]: E0127 14:13:39.332287 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.338424 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2b7e889-5c68-451f-8b52-24dd0e803088\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5eeae009702c1c5383383096dd9e08b76c7b26e668d297540d5280b5d8f06a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fff372f3ddc0bd2ecbb9a9609bf6f2b18fc4e85959914278282a9e80b13de0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2c4639583c72a76701c3456f44805c6a7c9410d7852e408d004060f5514b23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7fca653f52b
a2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f7fca653f52ba2ec98a6b0841208fbf4957f43feb6989f8a1a583728af9a024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.348215 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ec34ed6-cd96-48b7-8268-1962e1f3161d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e206520fefe3a26e199ebda2746e3a2f3416cf1830683dbbd2778341b7442e90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://431b0c55b5760340e06df228436e8b9df098e0fa9ef016d165371d50756e5ae2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.361819 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-k84ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f348cf7e-0a0b-400a-af50-1e342385c42d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bafd27108eb8bc8e1e67047a2d255a092040a85b5f73cb43cea4a050d4f03d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7c8030d5b858019f40ad409c91af93403d78099a6911320328f00b47f28188\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d972a46cf902f58f06a3b8b341b50852b06d9c45ec5f2f61577d6ddcaceb3535\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32c6576d8583f6d2e81aebca110157b90eb4dd679c0a183de46b743950461ea0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbdcb
861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbdcb861d75c86c4d76b64b3c749a80d205c5a593a5902b75eadb02c9599743f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f90d899e76068db874c10ae47722eea991187a26330763b3bdd2ff0b616d957c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:26Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7f80d1ddcf5cfd593dc9415e80fc13b8d00fdbf8258a22e6e569216214cafb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s95v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-k84ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.377437 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:11Z\\\",\\\"message\\\":\\\"47943 6899 services_controller.go:360] Finished syncing service controller-manager on namespace openshift-controller-manager for network=default : 1.939217ms\\\\nI0127 14:13:11.147908 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-authentication-operator Admin Network Policy controller: took 4.57µs\\\\nI0127 
14:13:11.148035 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:13:11.148116 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config in Admin Network Policy controller\\\\nI0127 14:13:11.148160 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:13:11.148197 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:13:11.148227 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 14:13:11.148281 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config Admin Network Policy controller: took 97.813µs\\\\nI0127 14:13:11.148339 6899 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-operator in Admin Network Policy controller\\\\nI0127 14:13:11.148389 6899 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-operator Admin Network Policy controller: took 51.062µs\\\\nF0127 14:13:11.148306 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:13:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a5462fc04a8755f3b
55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:12:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fpdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jpt5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.389574 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f0a252b-f914-49af-8dda-dae82e062424\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://baa927042cbec4083eb59f3c607563b22c6ef4ec4655788db5b7fd59f9e88f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46485f306f5ddd3adb9a702c9c2537bad568f0881e3be82110cf31285e5cd3c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3191ba7071a26531cf6ebc1224120030f871b4f20377b1eb8dc0f4f3ebc38f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:11:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.402342 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.416372 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-npb46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7a7c135-ca95-4e75-b823-d1e45101a761\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:13:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:13:10Z\\\",\\\"message\\\":\\\"2026-01-27T14:12:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239\\\\n2026-01-27T14:12:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c1656ef2-e0a8-460a-88c3-14bbc9bab239 to /host/opt/cni/bin/\\\\n2026-01-27T14:12:25Z [verbose] multus-daemon started\\\\n2026-01-27T14:12:25Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T14:13:10Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:12:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:13:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktxr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-npb46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.428239 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24d7702c-8ba7-4782-a39f-5104f5878a28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec686bc276b49cae9a8df6fd757acee1bf3d264908679ffd2fd5c3fcd07094e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e18002e6d2ad1bd335cbbfebd14a6d0b43545
baed9a6281c59b471a1472d6c95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:12:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5njfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:12:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q2f5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.446315 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18b4e278-90a6-4225-9a6a-9513b6393f8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:11:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287c3d615bd7e7385d8fc94b5f4f027cf04ed2a72cabd95418e37c39f258e112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://abfcbafd34ab0e5462f9dd60f4cc426423075c7e3404ec03135eb33ff057de79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e80827ed8a4e792d28c8a957e6b90d42a39a84644d09397d2188fb5fe7ffe6a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68835444b37ce96cd3f9dc6987bf696c32b5a72be91d12ef6b4e8f63c4466e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://647a6590aa4316529a2ebd0c2c85712942b774b361f53f127e8b12e139becb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:11:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d26693c9aabcc66e87daddec67f8d07ee9fda0a53e3236be050d0f878dd2331d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:11:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c7ec6b2872711bbca5af65545c3796c52ec2a0f2f51be8290a870e4ffe1fdf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9216f50da9f7b6a181cda1cc52fdba34bc4716e1e46414e6e1c4a87e30e99586\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:11:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:11:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:11:39Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:39 crc kubenswrapper[4833]: I0127 14:13:39.459076 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:12:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:13:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:13:40 crc kubenswrapper[4833]: I0127 14:13:40.209842 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:40 crc kubenswrapper[4833]: E0127 14:13:40.210359 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:40 crc kubenswrapper[4833]: I0127 14:13:40.230333 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:45:57.388098505 +0000 UTC Jan 27 14:13:41 crc kubenswrapper[4833]: I0127 14:13:41.209898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:41 crc kubenswrapper[4833]: I0127 14:13:41.209974 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:41 crc kubenswrapper[4833]: E0127 14:13:41.210898 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:41 crc kubenswrapper[4833]: E0127 14:13:41.211126 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:41 crc kubenswrapper[4833]: I0127 14:13:41.211147 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:41 crc kubenswrapper[4833]: E0127 14:13:41.211362 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:41 crc kubenswrapper[4833]: I0127 14:13:41.213590 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:13:41 crc kubenswrapper[4833]: E0127 14:13:41.214540 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jpt5h_openshift-ovn-kubernetes(696d56dd-3ce4-489e-a258-677cf1fd8f9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" Jan 27 14:13:41 crc kubenswrapper[4833]: I0127 14:13:41.230486 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 00:59:42.083556945 +0000 UTC Jan 27 14:13:42 crc kubenswrapper[4833]: I0127 14:13:42.210048 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:42 crc kubenswrapper[4833]: E0127 14:13:42.210665 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:42 crc kubenswrapper[4833]: I0127 14:13:42.231496 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:41:25.676714652 +0000 UTC Jan 27 14:13:43 crc kubenswrapper[4833]: I0127 14:13:43.209701 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:43 crc kubenswrapper[4833]: I0127 14:13:43.209723 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:43 crc kubenswrapper[4833]: E0127 14:13:43.210550 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:43 crc kubenswrapper[4833]: I0127 14:13:43.209900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:43 crc kubenswrapper[4833]: E0127 14:13:43.210604 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:43 crc kubenswrapper[4833]: E0127 14:13:43.210763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:43 crc kubenswrapper[4833]: I0127 14:13:43.231676 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:14:08.713784268 +0000 UTC Jan 27 14:13:44 crc kubenswrapper[4833]: I0127 14:13:44.209884 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:44 crc kubenswrapper[4833]: E0127 14:13:44.210240 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:44 crc kubenswrapper[4833]: I0127 14:13:44.232377 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:06:35.393609482 +0000 UTC Jan 27 14:13:44 crc kubenswrapper[4833]: E0127 14:13:44.333860 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.210590 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.210590 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:45 crc kubenswrapper[4833]: E0127 14:13:45.210748 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:45 crc kubenswrapper[4833]: E0127 14:13:45.210811 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.210606 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:45 crc kubenswrapper[4833]: E0127 14:13:45.210899 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.233933 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:33:47.880647072 +0000 UTC Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.263900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.263954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.263965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.263981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.263990 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:13:45Z","lastTransitionTime":"2026-01-27T14:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.310285 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5"] Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.310781 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.312819 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.312917 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.313297 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.319819 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.348643 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podStartSLOduration=104.348614526 podStartE2EDuration="1m44.348614526s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 14:13:45.332765821 +0000 UTC m=+126.984090223" watchObservedRunningTime="2026-01-27 14:13:45.348614526 +0000 UTC m=+126.999938948" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.349067 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6jftn" podStartSLOduration=104.349056979 podStartE2EDuration="1m44.349056979s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.348996317 +0000 UTC m=+127.000320759" watchObservedRunningTime="2026-01-27 14:13:45.349056979 +0000 UTC m=+127.000381391" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.379752 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=70.379730079 podStartE2EDuration="1m10.379730079s" podCreationTimestamp="2026-01-27 14:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.379141182 +0000 UTC m=+127.030465594" watchObservedRunningTime="2026-01-27 14:13:45.379730079 +0000 UTC m=+127.031054481" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.394625 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=49.394602836 podStartE2EDuration="49.394602836s" podCreationTimestamp="2026-01-27 14:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.394189514 +0000 UTC m=+127.045513916" watchObservedRunningTime="2026-01-27 14:13:45.394602836 +0000 UTC m=+127.045927248" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.420038 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.420087 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.420121 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.420153 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.420382 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.460183 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=98.460150421 podStartE2EDuration="1m38.460150421s" podCreationTimestamp="2026-01-27 14:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.460081289 +0000 UTC m=+127.111405691" watchObservedRunningTime="2026-01-27 14:13:45.460150421 +0000 UTC m=+127.111474823" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.521622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.521678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.521699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.521725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.521813 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.522640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.522755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.522797 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.537088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.548740 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-k84ff" podStartSLOduration=104.5487197 podStartE2EDuration="1m44.5487197s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.497871758 +0000 UTC m=+127.149196180" watchObservedRunningTime="2026-01-27 14:13:45.5487197 +0000 UTC m=+127.200044102" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.557188 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/730d0b0c-86c6-4ff7-aaa8-8d65e6749686-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lwgm5\" (UID: \"730d0b0c-86c6-4ff7-aaa8-8d65e6749686\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.608095 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=104.608074132 podStartE2EDuration="1m44.608074132s" 
podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.594280968 +0000 UTC m=+127.245605390" watchObservedRunningTime="2026-01-27 14:13:45.608074132 +0000 UTC m=+127.259398534" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.624316 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-npb46" podStartSLOduration=104.624290779 podStartE2EDuration="1m44.624290779s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.624177975 +0000 UTC m=+127.275502377" watchObservedRunningTime="2026-01-27 14:13:45.624290779 +0000 UTC m=+127.275615181" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.640264 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.661978 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q2f5d" podStartSLOduration=104.661954454 podStartE2EDuration="1m44.661954454s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.637845837 +0000 UTC m=+127.289170239" watchObservedRunningTime="2026-01-27 14:13:45.661954454 +0000 UTC m=+127.313278856" Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.662569 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=104.662560682 podStartE2EDuration="1m44.662560682s" podCreationTimestamp="2026-01-27 14:12:01 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.661766969 +0000 UTC m=+127.313091361" watchObservedRunningTime="2026-01-27 14:13:45.662560682 +0000 UTC m=+127.313885084" Jan 27 14:13:45 crc kubenswrapper[4833]: W0127 14:13:45.663378 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod730d0b0c_86c6_4ff7_aaa8_8d65e6749686.slice/crio-1f50e621eaf42bb5ba9f307bf7d739b2808e365096103229d9145fd2db690dc1 WatchSource:0}: Error finding container 1f50e621eaf42bb5ba9f307bf7d739b2808e365096103229d9145fd2db690dc1: Status 404 returned error can't find the container with id 1f50e621eaf42bb5ba9f307bf7d739b2808e365096103229d9145fd2db690dc1 Jan 27 14:13:45 crc kubenswrapper[4833]: I0127 14:13:45.712717 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xjvwp" podStartSLOduration=104.712692004 podStartE2EDuration="1m44.712692004s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:45.712358874 +0000 UTC m=+127.363683276" watchObservedRunningTime="2026-01-27 14:13:45.712692004 +0000 UTC m=+127.364016406" Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.209943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:46 crc kubenswrapper[4833]: E0127 14:13:46.210168 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.235993 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:42:23.858324504 +0000 UTC Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.236104 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.251583 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.298753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" event={"ID":"730d0b0c-86c6-4ff7-aaa8-8d65e6749686","Type":"ContainerStarted","Data":"3ce5823747de0af91d7b5872be32e69465c3ca0aaa783373e4b765d39441910f"} Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.298865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" event={"ID":"730d0b0c-86c6-4ff7-aaa8-8d65e6749686","Type":"ContainerStarted","Data":"1f50e621eaf42bb5ba9f307bf7d739b2808e365096103229d9145fd2db690dc1"} Jan 27 14:13:46 crc kubenswrapper[4833]: I0127 14:13:46.313409 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lwgm5" podStartSLOduration=105.313389797 podStartE2EDuration="1m45.313389797s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:46.312894543 +0000 UTC m=+127.964218955" watchObservedRunningTime="2026-01-27 14:13:46.313389797 +0000 UTC m=+127.964714209" Jan 27 14:13:47 crc 
kubenswrapper[4833]: I0127 14:13:47.210435 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:47 crc kubenswrapper[4833]: I0127 14:13:47.210538 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:47 crc kubenswrapper[4833]: E0127 14:13:47.210619 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:47 crc kubenswrapper[4833]: E0127 14:13:47.210773 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:47 crc kubenswrapper[4833]: I0127 14:13:47.211022 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:47 crc kubenswrapper[4833]: E0127 14:13:47.211122 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:48 crc kubenswrapper[4833]: I0127 14:13:48.209931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:48 crc kubenswrapper[4833]: E0127 14:13:48.210353 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:49 crc kubenswrapper[4833]: I0127 14:13:49.210116 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:49 crc kubenswrapper[4833]: I0127 14:13:49.210169 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:49 crc kubenswrapper[4833]: I0127 14:13:49.210214 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:49 crc kubenswrapper[4833]: E0127 14:13:49.211746 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:49 crc kubenswrapper[4833]: E0127 14:13:49.211818 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:49 crc kubenswrapper[4833]: E0127 14:13:49.211899 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:49 crc kubenswrapper[4833]: E0127 14:13:49.334767 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:13:50 crc kubenswrapper[4833]: I0127 14:13:50.209532 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:50 crc kubenswrapper[4833]: E0127 14:13:50.209734 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:51 crc kubenswrapper[4833]: I0127 14:13:51.210374 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:51 crc kubenswrapper[4833]: I0127 14:13:51.210480 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:51 crc kubenswrapper[4833]: I0127 14:13:51.210374 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:51 crc kubenswrapper[4833]: E0127 14:13:51.210581 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:51 crc kubenswrapper[4833]: E0127 14:13:51.210685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:51 crc kubenswrapper[4833]: E0127 14:13:51.210747 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:52 crc kubenswrapper[4833]: I0127 14:13:52.210417 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:52 crc kubenswrapper[4833]: E0127 14:13:52.210678 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:53 crc kubenswrapper[4833]: I0127 14:13:53.209970 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:53 crc kubenswrapper[4833]: I0127 14:13:53.210031 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:53 crc kubenswrapper[4833]: I0127 14:13:53.210035 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:53 crc kubenswrapper[4833]: E0127 14:13:53.210152 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:53 crc kubenswrapper[4833]: E0127 14:13:53.210277 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:53 crc kubenswrapper[4833]: E0127 14:13:53.210421 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:54 crc kubenswrapper[4833]: I0127 14:13:54.210284 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:54 crc kubenswrapper[4833]: E0127 14:13:54.210912 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:54 crc kubenswrapper[4833]: E0127 14:13:54.336125 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:13:55 crc kubenswrapper[4833]: I0127 14:13:55.210376 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:55 crc kubenswrapper[4833]: I0127 14:13:55.210534 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:55 crc kubenswrapper[4833]: E0127 14:13:55.210595 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:55 crc kubenswrapper[4833]: I0127 14:13:55.210405 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:55 crc kubenswrapper[4833]: E0127 14:13:55.210754 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:55 crc kubenswrapper[4833]: E0127 14:13:55.210869 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:56 crc kubenswrapper[4833]: I0127 14:13:56.209844 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:56 crc kubenswrapper[4833]: E0127 14:13:56.210281 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:56 crc kubenswrapper[4833]: I0127 14:13:56.210600 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:13:56 crc kubenswrapper[4833]: I0127 14:13:56.337765 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.023773 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jxvwd"] Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.023900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:57 crc kubenswrapper[4833]: E0127 14:13:57.024007 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.210484 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.210515 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.210542 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:57 crc kubenswrapper[4833]: E0127 14:13:57.210616 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:57 crc kubenswrapper[4833]: E0127 14:13:57.210701 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:57 crc kubenswrapper[4833]: E0127 14:13:57.210740 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.345114 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/1.log" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.346033 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/0.log" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.346083 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7a7c135-ca95-4e75-b823-d1e45101a761" containerID="67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f" exitCode=1 Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.346147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerDied","Data":"67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f"} Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.346191 4833 scope.go:117] "RemoveContainer" containerID="378a8201ac0412dcacb935b31b8a78c796801fd7aa374ed04d858aa0415fcf9e" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.346796 4833 scope.go:117] "RemoveContainer" containerID="67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f" Jan 27 14:13:57 crc kubenswrapper[4833]: E0127 14:13:57.347109 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-npb46_openshift-multus(b7a7c135-ca95-4e75-b823-d1e45101a761)\"" pod="openshift-multus/multus-npb46" podUID="b7a7c135-ca95-4e75-b823-d1e45101a761" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.349035 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.352333 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerStarted","Data":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.352984 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:13:57 crc kubenswrapper[4833]: I0127 14:13:57.388281 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podStartSLOduration=116.388260687 podStartE2EDuration="1m56.388260687s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:57.388100852 +0000 UTC m=+139.039425254" watchObservedRunningTime="2026-01-27 14:13:57.388260687 +0000 UTC m=+139.039585089" Jan 27 14:13:58 crc kubenswrapper[4833]: I0127 14:13:58.357735 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/1.log" Jan 27 14:13:59 crc kubenswrapper[4833]: I0127 14:13:59.209974 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:13:59 crc kubenswrapper[4833]: I0127 14:13:59.209978 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:13:59 crc kubenswrapper[4833]: I0127 14:13:59.209989 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:13:59 crc kubenswrapper[4833]: I0127 14:13:59.210029 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:13:59 crc kubenswrapper[4833]: E0127 14:13:59.211064 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:13:59 crc kubenswrapper[4833]: E0127 14:13:59.211187 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:13:59 crc kubenswrapper[4833]: E0127 14:13:59.211345 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:13:59 crc kubenswrapper[4833]: E0127 14:13:59.211706 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:13:59 crc kubenswrapper[4833]: E0127 14:13:59.336777 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:14:01 crc kubenswrapper[4833]: I0127 14:14:01.210352 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:01 crc kubenswrapper[4833]: I0127 14:14:01.210359 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:01 crc kubenswrapper[4833]: E0127 14:14:01.210592 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:01 crc kubenswrapper[4833]: I0127 14:14:01.210380 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:01 crc kubenswrapper[4833]: E0127 14:14:01.210650 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:01 crc kubenswrapper[4833]: I0127 14:14:01.210359 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:01 crc kubenswrapper[4833]: E0127 14:14:01.210743 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:01 crc kubenswrapper[4833]: E0127 14:14:01.210815 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:03 crc kubenswrapper[4833]: I0127 14:14:03.209942 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:03 crc kubenswrapper[4833]: I0127 14:14:03.210006 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:03 crc kubenswrapper[4833]: I0127 14:14:03.209955 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:03 crc kubenswrapper[4833]: E0127 14:14:03.210085 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:03 crc kubenswrapper[4833]: E0127 14:14:03.210181 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:03 crc kubenswrapper[4833]: I0127 14:14:03.210234 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:03 crc kubenswrapper[4833]: E0127 14:14:03.210302 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:03 crc kubenswrapper[4833]: E0127 14:14:03.210369 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:04 crc kubenswrapper[4833]: E0127 14:14:04.338579 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:14:05 crc kubenswrapper[4833]: I0127 14:14:05.210227 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:05 crc kubenswrapper[4833]: I0127 14:14:05.210280 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:05 crc kubenswrapper[4833]: I0127 14:14:05.210243 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:05 crc kubenswrapper[4833]: E0127 14:14:05.210394 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:05 crc kubenswrapper[4833]: I0127 14:14:05.210404 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:05 crc kubenswrapper[4833]: E0127 14:14:05.210571 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:05 crc kubenswrapper[4833]: E0127 14:14:05.210719 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:05 crc kubenswrapper[4833]: E0127 14:14:05.210819 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:07 crc kubenswrapper[4833]: I0127 14:14:07.210537 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:07 crc kubenswrapper[4833]: E0127 14:14:07.210679 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:07 crc kubenswrapper[4833]: I0127 14:14:07.210870 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:07 crc kubenswrapper[4833]: E0127 14:14:07.210918 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:07 crc kubenswrapper[4833]: I0127 14:14:07.211010 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:07 crc kubenswrapper[4833]: E0127 14:14:07.211064 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:07 crc kubenswrapper[4833]: I0127 14:14:07.211192 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:07 crc kubenswrapper[4833]: E0127 14:14:07.211254 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.089342 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.089622 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:16:11.089570879 +0000 UTC m=+272.740895321 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.190958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.191018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.191066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.191119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191280 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191304 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191319 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191322 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191387 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:16:11.191366646 +0000 UTC m=+272.842691048 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191400 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191420 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191414 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191332 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191545 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:16:11.19152062 +0000 UTC m=+272.842845022 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191679 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:16:11.191644944 +0000 UTC m=+272.842969516 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.191712 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:16:11.191702685 +0000 UTC m=+272.843027317 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.210137 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.210218 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.210257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.211482 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:09 crc kubenswrapper[4833]: I0127 14:14:09.211534 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.211678 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.211763 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.211844 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:09 crc kubenswrapper[4833]: E0127 14:14:09.339045 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:14:11 crc kubenswrapper[4833]: I0127 14:14:11.210590 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:11 crc kubenswrapper[4833]: I0127 14:14:11.210639 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:11 crc kubenswrapper[4833]: I0127 14:14:11.210718 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:11 crc kubenswrapper[4833]: E0127 14:14:11.210753 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:11 crc kubenswrapper[4833]: I0127 14:14:11.210877 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:11 crc kubenswrapper[4833]: E0127 14:14:11.211024 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:11 crc kubenswrapper[4833]: E0127 14:14:11.211171 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:11 crc kubenswrapper[4833]: E0127 14:14:11.211242 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:12 crc kubenswrapper[4833]: I0127 14:14:12.210771 4833 scope.go:117] "RemoveContainer" containerID="67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f" Jan 27 14:14:12 crc kubenswrapper[4833]: I0127 14:14:12.403395 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/1.log" Jan 27 14:14:12 crc kubenswrapper[4833]: I0127 14:14:12.403904 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerStarted","Data":"3ff19209ce0ef90cfad465697fd3b41d240f32ef7b2d01dd3d720eaed3f27367"} Jan 27 14:14:13 crc kubenswrapper[4833]: I0127 14:14:13.210111 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:13 crc kubenswrapper[4833]: I0127 14:14:13.210276 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:13 crc kubenswrapper[4833]: E0127 14:14:13.210319 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:14:13 crc kubenswrapper[4833]: I0127 14:14:13.210394 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:13 crc kubenswrapper[4833]: I0127 14:14:13.210439 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:13 crc kubenswrapper[4833]: E0127 14:14:13.210558 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxvwd" podUID="71508df5-3756-4f7d-ba4a-5dc54fa67ba6" Jan 27 14:14:13 crc kubenswrapper[4833]: E0127 14:14:13.210667 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:14:13 crc kubenswrapper[4833]: E0127 14:14:13.210771 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.210613 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.210631 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.210719 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.210750 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.213024 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.216123 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.216288 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.216348 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.216475 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.216593 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.585531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.668560 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k2dq7"] Jan 27 14:14:15 crc kubenswrapper[4833]: 
I0127 14:14:15.669282 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.671644 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.671921 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wblbf"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.672251 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.672584 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.672598 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.673176 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.673409 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.673582 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.674790 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.674815 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.674819 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.674984 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.675023 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.675290 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.675389 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.675706 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.676233 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.676710 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.677358 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wrnt6"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.678118 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.678484 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.680647 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ghfvn"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.680896 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.681110 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.694024 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.696514 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.697038 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.697464 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.700477 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.720790 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.720927 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.721423 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.721553 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.724774 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.724999 4833 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.725025 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.725437 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.725565 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.725600 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.725724 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726128 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726372 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726412 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726501 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726577 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726758 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726818 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726855 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.726940 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727051 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727134 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727067 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.727288 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727327 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727667 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727764 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727720 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.727981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728173 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728250 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728324 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728399 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728415 4833 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728471 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728581 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728600 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728662 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728694 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728753 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728782 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728793 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728816 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728606 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: 
I0127 14:14:15.728869 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728925 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728954 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.729143 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v59z8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.728962 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.729004 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.729002 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.729053 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.729649 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.730909 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.731082 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.731478 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.732393 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pbxl9"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.732895 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.733283 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.733759 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.733860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.734412 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.734520 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.734839 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rcfzx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.735521 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.736281 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sgpzc"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.736747 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.742512 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.748526 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.749036 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-p25n6"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.749431 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.749725 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.750094 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.751140 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.761984 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.765062 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.765987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.768965 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.769400 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.769636 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770001 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit-dir\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-images\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-serving-cert\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-node-pullsecrets\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770147 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-image-import-ca\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/76d19e5b-7dc6-48cb-946e-52f510d988ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zp5\" (UniqueName: \"kubernetes.io/projected/76d19e5b-7dc6-48cb-946e-52f510d988ae-kube-api-access-h4zp5\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770201 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de88d5a6-e83d-469a-906d-1b56d17a6be9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770220 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf5vl\" (UniqueName: \"kubernetes.io/projected/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-kube-api-access-mf5vl\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpps\" (UniqueName: \"kubernetes.io/projected/de88d5a6-e83d-469a-906d-1b56d17a6be9-kube-api-access-6gpps\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-auth-proxy-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770331 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-config\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770366 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1713b903-c20e-4d3d-8c23-a27712d98f28-machine-approver-tls\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-serving-ca\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770423 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8hf\" (UniqueName: \"kubernetes.io/projected/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-kube-api-access-9r8hf\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770495 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-client\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770521 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-serving-cert\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770541 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-dir\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770561 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grh6h\" (UniqueName: \"kubernetes.io/projected/1713b903-c20e-4d3d-8c23-a27712d98f28-kube-api-access-grh6h\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770600 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crt68\" (UniqueName: \"kubernetes.io/projected/3005a862-7d67-4c98-9706-fca9dfc75ba0-kube-api-access-crt68\") pod \"downloads-7954f5f757-ghfvn\" (UID: \"3005a862-7d67-4c98-9706-fca9dfc75ba0\") " pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770617 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.770636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-service-ca-bundle\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770654 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-config\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770672 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770694 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvzw4\" (UniqueName: \"kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770723 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770739 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-policies\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770771 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-encryption-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770789 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770819 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d523e68-740e-4514-a2eb-40ada703a657-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770836 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-client\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770854 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-encryption-config\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5x7\" (UniqueName: \"kubernetes.io/projected/6d523e68-740e-4514-a2eb-40ada703a657-kube-api-access-xz5x7\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770887 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smplf\" (UniqueName: \"kubernetes.io/projected/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-kube-api-access-smplf\") pod 
\"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770919 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770937 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt24\" (UniqueName: \"kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-serving-cert\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770966 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de88d5a6-e83d-469a-906d-1b56d17a6be9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.770991 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.819809 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.824421 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.828395 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.829573 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.829747 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.831011 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.831172 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.833840 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.835208 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.835759 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.835812 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.836071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.836585 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.847831 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.848164 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.848737 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.848963 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849085 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849225 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849748 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849856 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849884 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849348 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.850029 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.850114 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849621 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849327 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849475 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.850735 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.849681 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.850892 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851293 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851325 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851374 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851829 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851848 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.851992 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852077 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852156 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852280 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852540 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852548 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852679 
4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852298 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852349 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.852966 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.854086 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.854200 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.854272 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.856332 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.861313 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.863437 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.866667 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k2dq7"] Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.867270 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.867503 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.869807 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.870832 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.870947 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871537 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871564 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871591 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-serving-cert\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871646 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de88d5a6-e83d-469a-906d-1b56d17a6be9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871687 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/971371ea-478f-4f96-9693-9aa9a8897a38-tmpfs\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt24\" (UniqueName: \"kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871772 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871792 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit-dir\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871809 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsww\" (UniqueName: \"kubernetes.io/projected/9a6599b1-062c-49b9-96fd-c6ddf5464938-kube-api-access-vcsww\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " 
pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-serving-cert\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871852 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/679aa60c-9c8d-4596-81b3-b582dd821f2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-serving-cert\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdc8c\" (UniqueName: \"kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbwq\" 
(UniqueName: \"kubernetes.io/projected/3ed10be6-6292-4eed-abf0-14117bc24266-kube-api-access-9tbwq\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-images\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871957 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.871979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-apiservice-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp57v\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-kube-api-access-cp57v\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.872031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-image-import-ca\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/76d19e5b-7dc6-48cb-946e-52f510d988ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872102 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-node-pullsecrets\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872123 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: 
\"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872173 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zp5\" (UniqueName: \"kubernetes.io/projected/76d19e5b-7dc6-48cb-946e-52f510d988ae-kube-api-access-h4zp5\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de88d5a6-e83d-469a-906d-1b56d17a6be9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872209 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gpps\" (UniqueName: \"kubernetes.io/projected/de88d5a6-e83d-469a-906d-1b56d17a6be9-kube-api-access-6gpps\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872230 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8x8v\" (UniqueName: \"kubernetes.io/projected/971371ea-478f-4f96-9693-9aa9a8897a38-kube-api-access-d8x8v\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf5vl\" (UniqueName: \"kubernetes.io/projected/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-kube-api-access-mf5vl\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d53bc718-d5ff-48e8-baa2-a1068fdba801-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-auth-proxy-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-config\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1713b903-c20e-4d3d-8c23-a27712d98f28-machine-approver-tls\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-serving-ca\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-metrics-certs\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872435 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4fx\" (UniqueName: \"kubernetes.io/projected/2b22849d-9632-4f0a-96f4-997aa91300eb-kube-api-access-cm4fx\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872487 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8hf\" (UniqueName: \"kubernetes.io/projected/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-kube-api-access-9r8hf\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872507 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-client\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872545 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-serving-cert\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-dir\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b22849d-9632-4f0a-96f4-997aa91300eb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872607 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-webhook-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grh6h\" (UniqueName: \"kubernetes.io/projected/1713b903-c20e-4d3d-8c23-a27712d98f28-kube-api-access-grh6h\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872637 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-config\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed10be6-6292-4eed-abf0-14117bc24266-serving-cert\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crt68\" (UniqueName: 
\"kubernetes.io/projected/3005a862-7d67-4c98-9706-fca9dfc75ba0-kube-api-access-crt68\") pod \"downloads-7954f5f757-ghfvn\" (UID: \"3005a862-7d67-4c98-9706-fca9dfc75ba0\") " pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872715 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-stats-auth\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-service-ca-bundle\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872762 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6599b1-062c-49b9-96fd-c6ddf5464938-service-ca-bundle\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcg6\" (UniqueName: \"kubernetes.io/projected/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-kube-api-access-8lcg6\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872794 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-config\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvzw4\" (UniqueName: \"kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872867 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-default-certificate\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872883 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-trusted-ca\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-encryption-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-policies\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872942 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/679aa60c-9c8d-4596-81b3-b582dd821f2f-kube-api-access-cxv6v\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872980 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.872997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d523e68-740e-4514-a2eb-40ada703a657-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-client\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-encryption-config\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b22849d-9632-4f0a-96f4-997aa91300eb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smplf\" (UniqueName: \"kubernetes.io/projected/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-kube-api-access-smplf\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873082 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d53bc718-d5ff-48e8-baa2-a1068fdba801-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5x7\" (UniqueName: \"kubernetes.io/projected/6d523e68-740e-4514-a2eb-40ada703a657-kube-api-access-xz5x7\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.873738 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.875715 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.875925 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit-dir\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.876216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-node-pullsecrets\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.876591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.876952 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.877669 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n2rfz"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.877789 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-config\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.878242 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-config\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.878282 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-audit\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.878627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.878667 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.879038 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1713b903-c20e-4d3d-8c23-a27712d98f28-auth-proxy-config\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.879042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de88d5a6-e83d-469a-906d-1b56d17a6be9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.879483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.879489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.880091 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d523e68-740e-4514-a2eb-40ada703a657-images\") pod 
\"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.880464 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.880524 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d523e68-740e-4514-a2eb-40ada703a657-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.882739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.882768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-dir\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.883420 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-image-import-ca\") pod 
\"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.883721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-serving-ca\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.883764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-service-ca-bundle\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.884015 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.884466 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-audit-policies\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.885651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.886632 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.890488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-encryption-config\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.885791 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.891406 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.890986 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.895191 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.895401 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 
14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.895636 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.896282 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.897208 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.897712 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-serving-cert\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.897903 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-serving-cert\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.897927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-encryption-config\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.898079 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.898083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-etcd-client\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.898223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de88d5a6-e83d-469a-906d-1b56d17a6be9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.898329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-serving-cert\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.899058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1713b903-c20e-4d3d-8c23-a27712d98f28-machine-approver-tls\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.899696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/76d19e5b-7dc6-48cb-946e-52f510d988ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.900108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-etcd-client\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.900647 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.901558 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.902841 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.903682 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.915098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.916035 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rt6c"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.917306 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.925306 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.926818 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bs9ln"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.928815 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-plzjs"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.929247 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.931180 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wrnt6"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.931222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.931289 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.932976 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.934187 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.935216 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.936800 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.938542 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.940233 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v59z8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.941281 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-wblbf"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.942438 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.943528 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.944462 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.945624 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pbxl9"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.947339 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.948186 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.948764 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.950252 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.950820 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.952365 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ghfvn"] Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.953653 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rcfzx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.954495 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.955549 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.956553 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n2rfz"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.957574 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.958594 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.959653 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.960770 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xcpgj"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.961595 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.961769 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gccvh"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.962534 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.963216 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bs9ln"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.964255 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.965197 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.966219 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.967267 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sgpzc"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.968344 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.969535 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xcpgj"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.971179 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.972294 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.972641 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rt6c"] Jan 27 14:14:15 crc kubenswrapper[4833]: 
I0127 14:14:15.973746 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsww\" (UniqueName: \"kubernetes.io/projected/9a6599b1-062c-49b9-96fd-c6ddf5464938-kube-api-access-vcsww\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.973786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-serving-cert\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.973843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/679aa60c-9c8d-4596-81b3-b582dd821f2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.973875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbwq\" (UniqueName: \"kubernetes.io/projected/3ed10be6-6292-4eed-abf0-14117bc24266-kube-api-access-9tbwq\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.973934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdc8c\" (UniqueName: \"kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c\") pod 
\"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.973983 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-apiservice-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp57v\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-kube-api-access-cp57v\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974259 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d8x8v\" (UniqueName: \"kubernetes.io/projected/971371ea-478f-4f96-9693-9aa9a8897a38-kube-api-access-d8x8v\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974292 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d53bc718-d5ff-48e8-baa2-a1068fdba801-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-metrics-certs\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974373 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4fx\" (UniqueName: \"kubernetes.io/projected/2b22849d-9632-4f0a-96f4-997aa91300eb-kube-api-access-cm4fx\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b22849d-9632-4f0a-96f4-997aa91300eb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.974469 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.975204 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.975307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-webhook-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-config\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.975557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d53bc718-d5ff-48e8-baa2-a1068fdba801-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976577 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed10be6-6292-4eed-abf0-14117bc24266-serving-cert\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976643 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-stats-auth\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976667 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6599b1-062c-49b9-96fd-c6ddf5464938-service-ca-bundle\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lcg6\" (UniqueName: \"kubernetes.io/projected/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-kube-api-access-8lcg6\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc 
kubenswrapper[4833]: I0127 14:14:15.976763 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-default-certificate\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-trusted-ca\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976822 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/679aa60c-9c8d-4596-81b3-b582dd821f2f-kube-api-access-cxv6v\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " 
pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b22849d-9632-4f0a-96f4-997aa91300eb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976912 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d53bc718-d5ff-48e8-baa2-a1068fdba801-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976940 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976977 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.976997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/971371ea-478f-4f96-9693-9aa9a8897a38-tmpfs\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.977321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-config\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.977504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.977592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/971371ea-478f-4f96-9693-9aa9a8897a38-tmpfs\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.984187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ed10be6-6292-4eed-abf0-14117bc24266-serving-cert\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.984232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.984521 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-plzjs"] Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.984653 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-serving-cert\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.985157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d53bc718-d5ff-48e8-baa2-a1068fdba801-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.985179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.985198 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.985616 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b22849d-9632-4f0a-96f4-997aa91300eb-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.985620 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.986102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ed10be6-6292-4eed-abf0-14117bc24266-trusted-ca\") pod \"console-operator-58897d9998-v59z8\" (UID: 
\"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.992355 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 14:14:15 crc kubenswrapper[4833]: I0127 14:14:15.995133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b22849d-9632-4f0a-96f4-997aa91300eb-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.012018 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.032641 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.051184 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.072851 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.092710 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.101829 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-default-certificate\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " 
pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.111565 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.131419 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.151633 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.172366 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.178647 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-metrics-certs\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.191911 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.199728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9a6599b1-062c-49b9-96fd-c6ddf5464938-stats-auth\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.211523 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.231687 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.237833 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6599b1-062c-49b9-96fd-c6ddf5464938-service-ca-bundle\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.252343 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.259317 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-webhook-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.260157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/971371ea-478f-4f96-9693-9aa9a8897a38-apiservice-cert\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.271524 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.292081 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.312317 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.332496 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.351806 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.371856 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.392570 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.412890 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.432296 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.440974 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/679aa60c-9c8d-4596-81b3-b582dd821f2f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.452561 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.472309 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.493999 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.513080 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.540883 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.551931 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.573029 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.592340 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.612336 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.632539 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.661522 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:14:16 crc 
kubenswrapper[4833]: I0127 14:14:16.672798 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.691472 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.711753 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.733241 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.752272 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.772569 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.792034 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.812528 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.832775 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.852354 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 14:14:16 crc 
kubenswrapper[4833]: I0127 14:14:16.870113 4833 request.go:700] Waited for 1.018542716s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.873149 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.892797 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.932735 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.952690 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 14:14:16 crc kubenswrapper[4833]: I0127 14:14:16.972598 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.012385 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.031577 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.051012 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.072131 4833 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.091489 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.133130 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz5x7\" (UniqueName: \"kubernetes.io/projected/6d523e68-740e-4514-a2eb-40ada703a657-kube-api-access-xz5x7\") pod \"machine-api-operator-5694c8668f-wblbf\" (UID: \"6d523e68-740e-4514-a2eb-40ada703a657\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.151401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt24\" (UniqueName: \"kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24\") pod \"controller-manager-879f6c89f-qs4fq\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.173559 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grh6h\" (UniqueName: \"kubernetes.io/projected/1713b903-c20e-4d3d-8c23-a27712d98f28-kube-api-access-grh6h\") pod \"machine-approver-56656f9798-c44jl\" (UID: \"1713b903-c20e-4d3d-8c23-a27712d98f28\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.192245 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.200582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-crt68\" (UniqueName: \"kubernetes.io/projected/3005a862-7d67-4c98-9706-fca9dfc75ba0-kube-api-access-crt68\") pod \"downloads-7954f5f757-ghfvn\" (UID: \"3005a862-7d67-4c98-9706-fca9dfc75ba0\") " pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.211926 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.232782 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.258720 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.272554 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smplf\" (UniqueName: \"kubernetes.io/projected/2c3f34ad-6667-4cd7-a786-b2cbdc9f1668-kube-api-access-smplf\") pod \"authentication-operator-69f744f599-wrnt6\" (UID: \"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.286766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zp5\" (UniqueName: \"kubernetes.io/projected/76d19e5b-7dc6-48cb-946e-52f510d988ae-kube-api-access-h4zp5\") pod \"cluster-samples-operator-665b6dd947-b69ft\" (UID: \"76d19e5b-7dc6-48cb-946e-52f510d988ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.306838 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8hf\" (UniqueName: 
\"kubernetes.io/projected/a5ddd293-64a8-4907-a786-a4c0b5d57ab9-kube-api-access-9r8hf\") pod \"apiserver-76f77b778f-k2dq7\" (UID: \"a5ddd293-64a8-4907-a786-a4c0b5d57ab9\") " pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.327116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf5vl\" (UniqueName: \"kubernetes.io/projected/7a45b5f3-7e9f-47b5-87eb-457c581c9fff-kube-api-access-mf5vl\") pod \"apiserver-7bbb656c7d-fqhh4\" (UID: \"7a45b5f3-7e9f-47b5-87eb-457c581c9fff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.338891 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.359324 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvzw4\" (UniqueName: \"kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4\") pod \"route-controller-manager-6576b87f9c-zrj2p\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.365491 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.367765 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.368394 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gpps\" (UniqueName: \"kubernetes.io/projected/de88d5a6-e83d-469a-906d-1b56d17a6be9-kube-api-access-6gpps\") pod \"openshift-apiserver-operator-796bbdcf4f-8n5cx\" (UID: \"de88d5a6-e83d-469a-906d-1b56d17a6be9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.372848 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.391691 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.416697 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.426714 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.428785 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.432157 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.437689 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.442845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.454802 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.472810 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: W0127 14:14:17.484119 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1713b903_c20e_4d3d_8c23_a27712d98f28.slice/crio-39d84975a3c88f455be7707e4344a865d14d88a314732080010e612ccd3213d5 WatchSource:0}: Error finding container 39d84975a3c88f455be7707e4344a865d14d88a314732080010e612ccd3213d5: Status 404 returned error can't find the container with id 39d84975a3c88f455be7707e4344a865d14d88a314732080010e612ccd3213d5 Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.492434 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.502154 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.512163 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.525478 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.529101 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.532809 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:14:17 crc kubenswrapper[4833]: W0127 14:14:17.544299 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3420db2b_d99f_4c35_9423_1c2db40ac8da.slice/crio-157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603 WatchSource:0}: Error finding container 157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603: Status 404 returned error can't find the container with id 157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603 Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.551762 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.572652 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.594958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.612142 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.631546 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" 
Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.651913 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.676874 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.693134 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.712536 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.734292 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.752202 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.772721 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.785970 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wblbf"] Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.794808 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:14:17 crc kubenswrapper[4833]: W0127 14:14:17.801314 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d523e68_740e_4514_a2eb_40ada703a657.slice/crio-3fd92e6bd0738edcc64aa6fdb9e69008332b7b096cbbbe6e5874cc2169ec6eb5 WatchSource:0}: Error finding container 
3fd92e6bd0738edcc64aa6fdb9e69008332b7b096cbbbe6e5874cc2169ec6eb5: Status 404 returned error can't find the container with id 3fd92e6bd0738edcc64aa6fdb9e69008332b7b096cbbbe6e5874cc2169ec6eb5 Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.807182 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.816914 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.818551 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft"] Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.832360 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.853956 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.854470 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.870687 4833 request.go:700] Waited for 1.90795153s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.874889 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.892269 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.912119 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.946223 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ghfvn"] Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.949765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4fx\" (UniqueName: \"kubernetes.io/projected/2b22849d-9632-4f0a-96f4-997aa91300eb-kube-api-access-cm4fx\") pod \"openshift-controller-manager-operator-756b6f6bc6-x5778\" (UID: \"2b22849d-9632-4f0a-96f4-997aa91300eb\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.949853 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4"] Jan 27 14:14:17 crc kubenswrapper[4833]: W0127 14:14:17.958464 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a45b5f3_7e9f_47b5_87eb_457c581c9fff.slice/crio-b8711802ae44a9084902082b90b5083ab89b7f023d52d17cf20bfff8150d1a1f WatchSource:0}: Error finding container b8711802ae44a9084902082b90b5083ab89b7f023d52d17cf20bfff8150d1a1f: Status 404 returned error can't find the container with id b8711802ae44a9084902082b90b5083ab89b7f023d52d17cf20bfff8150d1a1f Jan 27 14:14:17 crc kubenswrapper[4833]: W0127 14:14:17.961573 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3005a862_7d67_4c98_9706_fca9dfc75ba0.slice/crio-212342f3d5a6df53581242f9fbe4bd6f91db748c760b7fde32713b6b5f757747 WatchSource:0}: Error finding container 
212342f3d5a6df53581242f9fbe4bd6f91db748c760b7fde32713b6b5f757747: Status 404 returned error can't find the container with id 212342f3d5a6df53581242f9fbe4bd6f91db748c760b7fde32713b6b5f757747 Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.967294 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsww\" (UniqueName: \"kubernetes.io/projected/9a6599b1-062c-49b9-96fd-c6ddf5464938-kube-api-access-vcsww\") pod \"router-default-5444994796-p25n6\" (UID: \"9a6599b1-062c-49b9-96fd-c6ddf5464938\") " pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.991334 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp57v\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-kube-api-access-cp57v\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:17 crc kubenswrapper[4833]: I0127 14:14:17.999945 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wrnt6"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.006223 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.012129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d53bc718-d5ff-48e8-baa2-a1068fdba801-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tngc9\" (UID: \"d53bc718-d5ff-48e8-baa2-a1068fdba801\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:18 crc kubenswrapper[4833]: W0127 14:14:18.020680 4833 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c3f34ad_6667_4cd7_a786_b2cbdc9f1668.slice/crio-8ad868915e6fa05c5d9bc32064cf1a18b138e06ea6e49fad5a4bfca885ea04ad WatchSource:0}: Error finding container 8ad868915e6fa05c5d9bc32064cf1a18b138e06ea6e49fad5a4bfca885ea04ad: Status 404 returned error can't find the container with id 8ad868915e6fa05c5d9bc32064cf1a18b138e06ea6e49fad5a4bfca885ea04ad Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.020948 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k2dq7"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.042309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8x8v\" (UniqueName: \"kubernetes.io/projected/971371ea-478f-4f96-9693-9aa9a8897a38-kube-api-access-d8x8v\") pod \"packageserver-d55dfcdfc-lgnww\" (UID: \"971371ea-478f-4f96-9693-9aa9a8897a38\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.050247 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.052556 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdc8c\" (UniqueName: \"kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c\") pod \"console-f9d7485db-rrz8c\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.073229 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.073884 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbwq\" (UniqueName: \"kubernetes.io/projected/3ed10be6-6292-4eed-abf0-14117bc24266-kube-api-access-9tbwq\") pod \"console-operator-58897d9998-v59z8\" (UID: \"3ed10be6-6292-4eed-abf0-14117bc24266\") " pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.085073 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.089112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lcg6\" (UniqueName: \"kubernetes.io/projected/84fae95f-8d24-4b0b-ab4d-a73565d4b64e-kube-api-access-8lcg6\") pod \"openshift-config-operator-7777fb866f-8jwlr\" (UID: \"84fae95f-8d24-4b0b-ab4d-a73565d4b64e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.114409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv6v\" (UniqueName: \"kubernetes.io/projected/679aa60c-9c8d-4596-81b3-b582dd821f2f-kube-api-access-cxv6v\") pod \"package-server-manager-789f6589d5-q7b7m\" (UID: \"679aa60c-9c8d-4596-81b3-b582dd821f2f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.135166 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.149632 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.172412 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20979785-edd8-4fca-96b2-3c7eb89cce18-metrics-tls\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80be9928-86a1-457d-b9b8-62a5a455362a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: 
\"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205854 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205878 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.205982 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206016 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206035 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s92vn\" (UniqueName: \"kubernetes.io/projected/80be9928-86a1-457d-b9b8-62a5a455362a-kube-api-access-s92vn\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206095 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rncq\" (UniqueName: \"kubernetes.io/projected/20979785-edd8-4fca-96b2-3c7eb89cce18-kube-api-access-2rncq\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-serving-cert\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206155 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/80be9928-86a1-457d-b9b8-62a5a455362a-proxy-tls\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-service-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: 
I0127 14:14:18.206203 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206276 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206344 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206410 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206431 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92rr\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206504 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-config\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206527 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-client\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206573 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dk6q\" (UniqueName: \"kubernetes.io/projected/efb05803-c39a-4610-9448-5950f6aa84f0-kube-api-access-7dk6q\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206595 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206613 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2lc\" (UniqueName: \"kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.206670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.207041 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:18.707025679 +0000 UTC m=+160.358350171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.307544 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308123 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlzg\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-kube-api-access-9qlzg\") pod 
\"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d9a6109-4066-468d-ba64-f28d15274e91-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308272 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308306 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/652f5322-b51b-4901-86e6-119f342d3c7c-cert\") pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308339 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6f6\" (UniqueName: \"kubernetes.io/projected/cb0ef16c-4892-4013-9fb2-0826a86ee88c-kube-api-access-bw6f6\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308354 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7023131d-ca42-4f25-a67c-80007823bf08-config\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/579a9e43-c282-4a46-ab4a-4b4564a8344f-trusted-ca\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") 
" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80bbbc08-a5ce-4139-837b-ad932e055904-serving-cert\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvzg\" (UniqueName: \"kubernetes.io/projected/7b818454-cee7-4a54-a628-1358afa71ef8-kube-api-access-vwvzg\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.308981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.309048 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.309087 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw5l\" (UniqueName: \"kubernetes.io/projected/9611f809-5d0e-47d9-90ad-b0799b4b786b-kube-api-access-8hw5l\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.309119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: 
\"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.309935 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.310319 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:18.810297977 +0000 UTC m=+160.461622379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-registration-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310580 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310596 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09f62199-a975-4b50-8872-2e9c47b174ec-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mdlh\" (UniqueName: \"kubernetes.io/projected/652f5322-b51b-4901-86e6-119f342d3c7c-kube-api-access-7mdlh\") pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 
14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-config\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.310982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhrc\" (UniqueName: 
\"kubernetes.io/projected/9704f9c7-174a-4eb5-9045-2cc38c248bdc-kube-api-access-wdhrc\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311004 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1efa337-030b-4790-be14-301fd44a869c-metrics-tls\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dk6q\" (UniqueName: \"kubernetes.io/projected/efb05803-c39a-4610-9448-5950f6aa84f0-kube-api-access-7dk6q\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-csi-data-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311098 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgp9r\" (UniqueName: \"kubernetes.io/projected/80bbbc08-a5ce-4139-837b-ad932e055904-kube-api-access-kgp9r\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311120 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-plugins-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311207 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20979785-edd8-4fca-96b2-3c7eb89cce18-metrics-tls\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311222 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqxs\" (UniqueName: \"kubernetes.io/projected/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-kube-api-access-xxqxs\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7023131d-ca42-4f25-a67c-80007823bf08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/579a9e43-c282-4a46-ab4a-4b4564a8344f-metrics-tls\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.311535 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.312032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.312869 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-srv-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.312950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313075 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4czl\" (UniqueName: \"kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl\") pod \"collect-profiles-29492040-5qwsn\" (UID: 
\"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313259 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-config\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpv7\" (UniqueName: \"kubernetes.io/projected/09f62199-a975-4b50-8872-2e9c47b174ec-kube-api-access-xfpv7\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-proxy-tls\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313806 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-certs\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313935 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1efa337-030b-4790-be14-301fd44a869c-config-volume\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.313999 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5v8\" (UniqueName: \"kubernetes.io/projected/b330063a-01a9-4719-9975-726b52f04189-kube-api-access-zs5v8\") pod \"migrator-59844c95c7-5dsln\" (UID: \"b330063a-01a9-4719-9975-726b52f04189\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314088 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9611f809-5d0e-47d9-90ad-b0799b4b786b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8q8\" (UniqueName: \"kubernetes.io/projected/50e0dc7f-981b-4ed6-999a-75ca3b351704-kube-api-access-sx8q8\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s92vn\" (UniqueName: \"kubernetes.io/projected/80be9928-86a1-457d-b9b8-62a5a455362a-kube-api-access-s92vn\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314169 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2rncq\" (UniqueName: \"kubernetes.io/projected/20979785-edd8-4fca-96b2-3c7eb89cce18-kube-api-access-2rncq\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-key\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314377 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314462 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-serving-cert\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/80be9928-86a1-457d-b9b8-62a5a455362a-proxy-tls\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314721 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-service-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.314958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f62199-a975-4b50-8872-2e9c47b174ec-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315099 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-mountpoint-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: 
\"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315132 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-socket-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-cabundle\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315394 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-node-bootstrap-token\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/cb0ef16c-4892-4013-9fb2-0826a86ee88c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-images\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315496 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-service-ca\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-srv-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d9a6109-4066-468d-ba64-f28d15274e91-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 
27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.315699 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5pcp\" (UniqueName: \"kubernetes.io/projected/d3359901-3a4a-41e2-8903-cc77b459a563-kube-api-access-w5pcp\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.316007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92rr\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.316775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.316821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bbbc08-a5ce-4139-837b-ad932e055904-config\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.317394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.318599 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-serving-cert\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.319164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.319358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.319611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20979785-edd8-4fca-96b2-3c7eb89cce18-metrics-tls\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.319661 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320514 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320559 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-client\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320642 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2lc\" (UniqueName: \"kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320816 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7023131d-ca42-4f25-a67c-80007823bf08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdpg\" (UniqueName: \"kubernetes.io/projected/74b89f19-21ec-44b6-8c7f-f0968db841be-kube-api-access-lrdpg\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320902 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72wrc\" (UniqueName: \"kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d9a6109-4066-468d-ba64-f28d15274e91-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.320977 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.321048 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.321069 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80be9928-86a1-457d-b9b8-62a5a455362a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: 
\"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.321117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrl4\" (UniqueName: \"kubernetes.io/projected/c1efa337-030b-4790-be14-301fd44a869c-kube-api-access-vrrl4\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.321651 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:18.821636786 +0000 UTC m=+160.472961188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.322675 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.322864 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.323133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.323167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.323630 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80be9928-86a1-457d-b9b8-62a5a455362a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.325304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.325322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.325335 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.325365 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/80be9928-86a1-457d-b9b8-62a5a455362a-proxy-tls\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.327010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.329998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efb05803-c39a-4610-9448-5950f6aa84f0-etcd-client\") pod 
\"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.330021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.335609 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.363824 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.364013 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.371088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dk6q\" (UniqueName: \"kubernetes.io/projected/efb05803-c39a-4610-9448-5950f6aa84f0-kube-api-access-7dk6q\") pod \"etcd-operator-b45778765-rcfzx\" (UID: \"efb05803-c39a-4610-9448-5950f6aa84f0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.376947 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb023c77-9c18-4fb9-a6cd-53992aec9a4d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p2ckx\" (UID: \"eb023c77-9c18-4fb9-a6cd-53992aec9a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.390406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s92vn\" (UniqueName: \"kubernetes.io/projected/80be9928-86a1-457d-b9b8-62a5a455362a-kube-api-access-s92vn\") pod \"machine-config-controller-84d6567774-bhf48\" (UID: \"80be9928-86a1-457d-b9b8-62a5a455362a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.404587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rncq\" (UniqueName: \"kubernetes.io/projected/20979785-edd8-4fca-96b2-3c7eb89cce18-kube-api-access-2rncq\") pod \"dns-operator-744455d44c-sgpzc\" (UID: \"20979785-edd8-4fca-96b2-3c7eb89cce18\") " pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.416841 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.421909 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-cabundle\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb0ef16c-4892-4013-9fb2-0826a86ee88c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-node-bootstrap-token\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-images\") pod 
\"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422235 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-srv-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422254 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d9a6109-4066-468d-ba64-f28d15274e91-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422277 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5pcp\" (UniqueName: \"kubernetes.io/projected/d3359901-3a4a-41e2-8903-cc77b459a563-kube-api-access-w5pcp\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bbbc08-a5ce-4139-837b-ad932e055904-config\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422347 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7023131d-ca42-4f25-a67c-80007823bf08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdpg\" (UniqueName: \"kubernetes.io/projected/74b89f19-21ec-44b6-8c7f-f0968db841be-kube-api-access-lrdpg\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422458 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wrc\" (UniqueName: \"kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d9a6109-4066-468d-ba64-f28d15274e91-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422545 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrl4\" (UniqueName: \"kubernetes.io/projected/c1efa337-030b-4790-be14-301fd44a869c-kube-api-access-vrrl4\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422571 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlzg\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-kube-api-access-9qlzg\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d9a6109-4066-468d-ba64-f28d15274e91-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/652f5322-b51b-4901-86e6-119f342d3c7c-cert\") 
pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7023131d-ca42-4f25-a67c-80007823bf08-config\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/579a9e43-c282-4a46-ab4a-4b4564a8344f-trusted-ca\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6f6\" (UniqueName: \"kubernetes.io/projected/cb0ef16c-4892-4013-9fb2-0826a86ee88c-kube-api-access-bw6f6\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422785 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vwvzg\" (UniqueName: \"kubernetes.io/projected/7b818454-cee7-4a54-a628-1358afa71ef8-kube-api-access-vwvzg\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80bbbc08-a5ce-4139-837b-ad932e055904-serving-cert\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422854 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hw5l\" (UniqueName: \"kubernetes.io/projected/9611f809-5d0e-47d9-90ad-b0799b4b786b-kube-api-access-8hw5l\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-registration-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 
14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09f62199-a975-4b50-8872-2e9c47b174ec-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mdlh\" (UniqueName: \"kubernetes.io/projected/652f5322-b51b-4901-86e6-119f342d3c7c-kube-api-access-7mdlh\") pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhrc\" (UniqueName: \"kubernetes.io/projected/9704f9c7-174a-4eb5-9045-2cc38c248bdc-kube-api-access-wdhrc\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.422990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1efa337-030b-4790-be14-301fd44a869c-metrics-tls\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423039 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-csi-data-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423084 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgp9r\" (UniqueName: \"kubernetes.io/projected/80bbbc08-a5ce-4139-837b-ad932e055904-kube-api-access-kgp9r\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423106 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-plugins-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqxs\" (UniqueName: \"kubernetes.io/projected/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-kube-api-access-xxqxs\") 
pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7023131d-ca42-4f25-a67c-80007823bf08-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/579a9e43-c282-4a46-ab4a-4b4564a8344f-metrics-tls\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423234 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-srv-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4czl\" (UniqueName: \"kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpv7\" (UniqueName: \"kubernetes.io/projected/09f62199-a975-4b50-8872-2e9c47b174ec-kube-api-access-xfpv7\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-certs\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-proxy-tls\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1efa337-030b-4790-be14-301fd44a869c-config-volume\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423372 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9611f809-5d0e-47d9-90ad-b0799b4b786b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423397 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8q8\" (UniqueName: \"kubernetes.io/projected/50e0dc7f-981b-4ed6-999a-75ca3b351704-kube-api-access-sx8q8\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423422 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5v8\" (UniqueName: \"kubernetes.io/projected/b330063a-01a9-4719-9975-726b52f04189-kube-api-access-zs5v8\") pod \"migrator-59844c95c7-5dsln\" (UID: \"b330063a-01a9-4719-9975-726b52f04189\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423466 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-key\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423497 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f62199-a975-4b50-8872-2e9c47b174ec-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-socket-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-mountpoint-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.423643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-mountpoint-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.426881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" 
(UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-csi-data-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.427725 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d9a6109-4066-468d-ba64-f28d15274e91-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.430520 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-auth-proxy-config\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.431318 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.431469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7023131d-ca42-4f25-a67c-80007823bf08-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.432079 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-plugins-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.434780 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:18.934758472 +0000 UTC m=+160.586082874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.435320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/652f5322-b51b-4901-86e6-119f342d3c7c-cert\") pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.435431 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-cabundle\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.435781 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cb0ef16c-4892-4013-9fb2-0826a86ee88c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.435905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7023131d-ca42-4f25-a67c-80007823bf08-config\") pod \"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.438276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80bbbc08-a5ce-4139-837b-ad932e055904-config\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.439634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.440329 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.443278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.444057 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-registration-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.444628 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.444873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/50e0dc7f-981b-4ed6-999a-75ca3b351704-socket-dir\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.445433 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1efa337-030b-4790-be14-301fd44a869c-config-volume\") pod \"dns-default-xcpgj\" (UID: 
\"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.445701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/579a9e43-c282-4a46-ab4a-4b4564a8344f-metrics-tls\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.447032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-profile-collector-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.448403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d9a6109-4066-468d-ba64-f28d15274e91-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.448977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-images\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.451515 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7b818454-cee7-4a54-a628-1358afa71ef8-srv-cert\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.451559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" event={"ID":"6d523e68-740e-4514-a2eb-40ada703a657","Type":"ContainerStarted","Data":"ea88742da0337f469fb0876d92f7f8edfab50b669a1bba2a18f56113a04988c8"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.451599 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" event={"ID":"6d523e68-740e-4514-a2eb-40ada703a657","Type":"ContainerStarted","Data":"3fd92e6bd0738edcc64aa6fdb9e69008332b7b096cbbbe6e5874cc2169ec6eb5"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.451901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f62199-a975-4b50-8872-2e9c47b174ec-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.452330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/579a9e43-c282-4a46-ab4a-4b4564a8344f-trusted-ca\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.453807 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9611f809-5d0e-47d9-90ad-b0799b4b786b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.454667 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.455308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1efa337-030b-4790-be14-301fd44a869c-metrics-tls\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.456361 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-node-bootstrap-token\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.456858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9704f9c7-174a-4eb5-9045-2cc38c248bdc-signing-key\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.457381 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09f62199-a975-4b50-8872-2e9c47b174ec-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: 
\"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.458848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80bbbc08-a5ce-4139-837b-ad932e055904-serving-cert\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.460354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-proxy-tls\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.460541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92rr\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.461077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.461092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/74b89f19-21ec-44b6-8c7f-f0968db841be-srv-cert\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.462277 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d3359901-3a4a-41e2-8903-cc77b459a563-certs\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.465230 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.469506 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" event={"ID":"1713b903-c20e-4d3d-8c23-a27712d98f28","Type":"ContainerStarted","Data":"189695ce308aed2d33633f9fa7d2a17faa2a13a17b0837e10f061d869bdf54b5"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.469553 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" event={"ID":"1713b903-c20e-4d3d-8c23-a27712d98f28","Type":"ContainerStarted","Data":"39d84975a3c88f455be7707e4344a865d14d88a314732080010e612ccd3213d5"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.472048 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.474920 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.477468 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.482207 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ghfvn" event={"ID":"3005a862-7d67-4c98-9706-fca9dfc75ba0","Type":"ContainerStarted","Data":"c4667a0ba9a93a2eb1ee8632d74024dad002984e0c6790825e7db15127b106d1"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.482245 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ghfvn" event={"ID":"3005a862-7d67-4c98-9706-fca9dfc75ba0","Type":"ContainerStarted","Data":"212342f3d5a6df53581242f9fbe4bd6f91db748c760b7fde32713b6b5f757747"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.483195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.488733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" event={"ID":"6e9f3d52-0137-4dc9-9306-970401a0f7af","Type":"ContainerStarted","Data":"491ba252d1e2837611afbd4387c77dffb21164fce5ca6a8473c6b908fb1ea55a"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.488814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" event={"ID":"6e9f3d52-0137-4dc9-9306-970401a0f7af","Type":"ContainerStarted","Data":"96a074cdde2897134952c716a1b41e3f6c962c9baeb89368c277ad8b0aa4642e"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.489110 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.491065 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.491101 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.491866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" event={"ID":"76d19e5b-7dc6-48cb-946e-52f510d988ae","Type":"ContainerStarted","Data":"a54dbcac4f8ac9bf6410e8d21e2b85c46309d4360d3f375cadefaf0285976cec"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.493562 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" event={"ID":"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668","Type":"ContainerStarted","Data":"e9466ba4f2504014c166c8060b7060c76900981177af15816ed009ebf3427401"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.493584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" event={"ID":"2c3f34ad-6667-4cd7-a786-b2cbdc9f1668","Type":"ContainerStarted","Data":"8ad868915e6fa05c5d9bc32064cf1a18b138e06ea6e49fad5a4bfca885ea04ad"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.495764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2lc\" 
(UniqueName: \"kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc\") pod \"oauth-openshift-558db77b4-pbxl9\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.499114 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zrj2p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.499168 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.500896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" event={"ID":"de88d5a6-e83d-469a-906d-1b56d17a6be9","Type":"ContainerStarted","Data":"9f497f5080c71781ef1e18da34624c305f61850c3c64fc78495d5f2efa987398"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.500949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" event={"ID":"de88d5a6-e83d-469a-906d-1b56d17a6be9","Type":"ContainerStarted","Data":"d3c38b87a3b92db68fed57afec88cbe753145baa394b853951675f0b239e6880"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.502218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778"] Jan 27 14:14:18 crc 
kubenswrapper[4833]: I0127 14:14:18.505734 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" event={"ID":"3420db2b-d99f-4c35-9423-1c2db40ac8da","Type":"ContainerStarted","Data":"8b46dbbc96dd54a56613220cc713ded7ed463b680f16630d43fe9ee83ea93124"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.505773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" event={"ID":"3420db2b-d99f-4c35-9423-1c2db40ac8da","Type":"ContainerStarted","Data":"157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.506364 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.512191 4833 generic.go:334] "Generic (PLEG): container finished" podID="7a45b5f3-7e9f-47b5-87eb-457c581c9fff" containerID="fbe894358afb0eb26db415eda06d24880a76d8088d82e18be6f5b76138d56951" exitCode=0 Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.512315 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" event={"ID":"7a45b5f3-7e9f-47b5-87eb-457c581c9fff","Type":"ContainerDied","Data":"fbe894358afb0eb26db415eda06d24880a76d8088d82e18be6f5b76138d56951"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.512325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdpg\" (UniqueName: \"kubernetes.io/projected/74b89f19-21ec-44b6-8c7f-f0968db841be-kube-api-access-lrdpg\") pod \"olm-operator-6b444d44fb-t7dr8\" (UID: \"74b89f19-21ec-44b6-8c7f-f0968db841be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.512351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" event={"ID":"7a45b5f3-7e9f-47b5-87eb-457c581c9fff","Type":"ContainerStarted","Data":"b8711802ae44a9084902082b90b5083ab89b7f023d52d17cf20bfff8150d1a1f"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.518623 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qs4fq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.518630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p25n6" event={"ID":"9a6599b1-062c-49b9-96fd-c6ddf5464938","Type":"ContainerStarted","Data":"55a0ec942a04ace5b3a4a8bf7955f876b815aeed129ea089cf26234ff1092ba5"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.518689 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.524819 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.525282 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:19.025264111 +0000 UTC m=+160.676588513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.530216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" event={"ID":"a5ddd293-64a8-4907-a786-a4c0b5d57ab9","Type":"ContainerStarted","Data":"c49c41ed94f27b6bdd6fd39baa928dec43a8ebd59d23161bc2bb8e0eb0ca10b2"} Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.537482 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wrc\" (UniqueName: \"kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc\") pod \"marketplace-operator-79b997595-czv2v\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.553147 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.570731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrl4\" (UniqueName: \"kubernetes.io/projected/c1efa337-030b-4790-be14-301fd44a869c-kube-api-access-vrrl4\") pod \"dns-default-xcpgj\" (UID: \"c1efa337-030b-4790-be14-301fd44a869c\") " pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.587926 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.596752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlzg\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-kube-api-access-9qlzg\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.609982 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4czl\" (UniqueName: \"kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl\") pod \"collect-profiles-29492040-5qwsn\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.627311 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.629150 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.129129064 +0000 UTC m=+160.780453466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.630928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgp9r\" (UniqueName: \"kubernetes.io/projected/80bbbc08-a5ce-4139-837b-ad932e055904-kube-api-access-kgp9r\") pod \"service-ca-operator-777779d784-4qx7h\" (UID: \"80bbbc08-a5ce-4139-837b-ad932e055904\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.638846 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.652514 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqxs\" (UniqueName: \"kubernetes.io/projected/0fdbd9ee-ce4e-4dce-98ac-670862cdf5da-kube-api-access-xxqxs\") pod \"machine-config-operator-74547568cd-84mnv\" (UID: \"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.670167 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.676322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7023131d-ca42-4f25-a67c-80007823bf08-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-9p6ff\" (UID: \"7023131d-ca42-4f25-a67c-80007823bf08\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.691265 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d9a6109-4066-468d-ba64-f28d15274e91-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-f9mp8\" (UID: \"6d9a6109-4066-468d-ba64-f28d15274e91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.707749 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.717893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5pcp\" (UniqueName: \"kubernetes.io/projected/d3359901-3a4a-41e2-8903-cc77b459a563-kube-api-access-w5pcp\") pod \"machine-config-server-gccvh\" (UID: \"d3359901-3a4a-41e2-8903-cc77b459a563\") " pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.770648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.770930 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:19.270918843 +0000 UTC m=+160.922243245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.788611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8q8\" (UniqueName: \"kubernetes.io/projected/50e0dc7f-981b-4ed6-999a-75ca3b351704-kube-api-access-sx8q8\") pod \"csi-hostpathplugin-bs9ln\" (UID: \"50e0dc7f-981b-4ed6-999a-75ca3b351704\") " pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.790815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hw5l\" (UniqueName: \"kubernetes.io/projected/9611f809-5d0e-47d9-90ad-b0799b4b786b-kube-api-access-8hw5l\") pod \"control-plane-machine-set-operator-78cbb6b69f-gmnmv\" (UID: \"9611f809-5d0e-47d9-90ad-b0799b4b786b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.791430 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6f6\" (UniqueName: \"kubernetes.io/projected/cb0ef16c-4892-4013-9fb2-0826a86ee88c-kube-api-access-bw6f6\") pod \"multus-admission-controller-857f4d67dd-9rt6c\" (UID: \"cb0ef16c-4892-4013-9fb2-0826a86ee88c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.800164 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.802920 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpv7\" (UniqueName: \"kubernetes.io/projected/09f62199-a975-4b50-8872-2e9c47b174ec-kube-api-access-xfpv7\") pod \"kube-storage-version-migrator-operator-b67b599dd-62vh8\" (UID: \"09f62199-a975-4b50-8872-2e9c47b174ec\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.808035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5v8\" (UniqueName: \"kubernetes.io/projected/b330063a-01a9-4719-9975-726b52f04189-kube-api-access-zs5v8\") pod \"migrator-59844c95c7-5dsln\" (UID: \"b330063a-01a9-4719-9975-726b52f04189\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.810254 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.820788 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.834596 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvzg\" (UniqueName: \"kubernetes.io/projected/7b818454-cee7-4a54-a628-1358afa71ef8-kube-api-access-vwvzg\") pod \"catalog-operator-68c6474976-mn2nx\" (UID: \"7b818454-cee7-4a54-a628-1358afa71ef8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.837880 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.845061 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.848664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mdlh\" (UniqueName: \"kubernetes.io/projected/652f5322-b51b-4901-86e6-119f342d3c7c-kube-api-access-7mdlh\") pod \"ingress-canary-plzjs\" (UID: \"652f5322-b51b-4901-86e6-119f342d3c7c\") " pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.854456 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.868848 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.870247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/579a9e43-c282-4a46-ab4a-4b4564a8344f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9xp9d\" (UID: \"579a9e43-c282-4a46-ab4a-4b4564a8344f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.871828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.872404 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.372385211 +0000 UTC m=+161.023709703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.879665 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.889371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhrc\" (UniqueName: \"kubernetes.io/projected/9704f9c7-174a-4eb5-9045-2cc38c248bdc-kube-api-access-wdhrc\") pod \"service-ca-9c57cc56f-n2rfz\" (UID: \"9704f9c7-174a-4eb5-9045-2cc38c248bdc\") " pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.892833 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.911011 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48"] Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.918682 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.929523 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-plzjs" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.947620 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gccvh" Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.976379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:18 crc kubenswrapper[4833]: E0127 14:14:18.976771 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.476754998 +0000 UTC m=+161.128079400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:18 crc kubenswrapper[4833]: I0127 14:14:18.982092 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-v59z8"] Jan 27 14:14:19 crc kubenswrapper[4833]: W0127 14:14:19.037180 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80be9928_86a1_457d_b9b8_62a5a455362a.slice/crio-3623ccb4882b58df2f1cf7d48762c4a239f67b052691f19e0048628d9ec146bf WatchSource:0}: Error finding container 3623ccb4882b58df2f1cf7d48762c4a239f67b052691f19e0048628d9ec146bf: Status 404 returned error can't find 
the container with id 3623ccb4882b58df2f1cf7d48762c4a239f67b052691f19e0048628d9ec146bf Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.041746 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.076574 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-sgpzc"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.077059 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.077421 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.577398644 +0000 UTC m=+161.228723056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.077556 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.077877 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.577866526 +0000 UTC m=+161.229190928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.082231 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.090725 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.093079 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.117397 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.129927 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.161816 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.162646 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.173488 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rcfzx"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.180299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.180483 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.680455815 +0000 UTC m=+161.331780217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.180677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.181002 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.68098901 +0000 UTC m=+161.332313412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.203909 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pbxl9"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.281372 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.282069 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.782054187 +0000 UTC m=+161.433378589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: W0127 14:14:19.302377 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod810332c8_987e_485f_9940_d1b61944b1a8.slice/crio-390bad0051d4ea4277ca5a892682754120dfb7df134d93d78c6bb641c137b6d2 WatchSource:0}: Error finding container 390bad0051d4ea4277ca5a892682754120dfb7df134d93d78c6bb641c137b6d2: Status 404 returned error can't find the container with id 390bad0051d4ea4277ca5a892682754120dfb7df134d93d78c6bb641c137b6d2 Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.382991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.383408 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.883394831 +0000 UTC m=+161.534719233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.428928 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xcpgj"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.483741 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.484084 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.984052098 +0000 UTC m=+161.635376520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.484401 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.484908 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:19.984890211 +0000 UTC m=+161.636214603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.552381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p25n6" event={"ID":"9a6599b1-062c-49b9-96fd-c6ddf5464938","Type":"ContainerStarted","Data":"f6524b6ae1a44bbd773bd8775dd192867184f542c8ddea6950d38844577d7bc4"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.579668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" event={"ID":"d53bc718-d5ff-48e8-baa2-a1068fdba801","Type":"ContainerStarted","Data":"f1b8ecb7034bfbc217e95bf653e745100a63bb2573294006e67925ddf0fecdd1"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.579738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" event={"ID":"d53bc718-d5ff-48e8-baa2-a1068fdba801","Type":"ContainerStarted","Data":"84ed550303d14aa0d4904f822f2bdedb15d6849ace04c134a709fc3caa31b54e"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.586371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.586855 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.086814391 +0000 UTC m=+161.738138813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.587146 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.587626 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.087616833 +0000 UTC m=+161.738941235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.598556 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" event={"ID":"810332c8-987e-485f-9940-d1b61944b1a8","Type":"ContainerStarted","Data":"390bad0051d4ea4277ca5a892682754120dfb7df134d93d78c6bb641c137b6d2"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.613416 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" event={"ID":"efb05803-c39a-4610-9448-5950f6aa84f0","Type":"ContainerStarted","Data":"2c7a146cc8cfd436778e34c0bb7b43fe88e4ac203e5e14a129a52de53a6c6521"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.622128 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" event={"ID":"76d19e5b-7dc6-48cb-946e-52f510d988ae","Type":"ContainerStarted","Data":"e8175db4b2566c9cda27d6f8c3989c74d6adf106519a42622dd2f59d9b90e3f2"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.640952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" event={"ID":"20979785-edd8-4fca-96b2-3c7eb89cce18","Type":"ContainerStarted","Data":"8c7a14939a401db7978796eff15e126d7bd496b6223a96c1d283c9142c82f2ce"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.656563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" event={"ID":"80be9928-86a1-457d-b9b8-62a5a455362a","Type":"ContainerStarted","Data":"3623ccb4882b58df2f1cf7d48762c4a239f67b052691f19e0048628d9ec146bf"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.669110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" event={"ID":"84fae95f-8d24-4b0b-ab4d-a73565d4b64e","Type":"ContainerStarted","Data":"e04205398be688d78dc14adbc9fc119edd2a9781ae8dd3f2319d649ca6002016"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.673911 4833 generic.go:334] "Generic (PLEG): container finished" podID="a5ddd293-64a8-4907-a786-a4c0b5d57ab9" containerID="076b9366337b03a532f8c6eb1a9ae49ad24be280eaefe4bc82a571259b3923b0" exitCode=0 Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.674005 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" event={"ID":"a5ddd293-64a8-4907-a786-a4c0b5d57ab9","Type":"ContainerDied","Data":"076b9366337b03a532f8c6eb1a9ae49ad24be280eaefe4bc82a571259b3923b0"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.680540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" event={"ID":"7023131d-ca42-4f25-a67c-80007823bf08","Type":"ContainerStarted","Data":"ef251942300fe5f39c03760263d4b1464e0f07392af50469e7aaba8922db39e2"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.686989 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" event={"ID":"1713b903-c20e-4d3d-8c23-a27712d98f28","Type":"ContainerStarted","Data":"271b037f52f28afee56badc93a8fa4b880d5880323cae43b9013a4ccf29b7292"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.688134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.688241 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.188201057 +0000 UTC m=+161.839525459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.688710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.689862 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.189848842 +0000 UTC m=+161.841173244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.693052 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" event={"ID":"6d523e68-740e-4514-a2eb-40ada703a657","Type":"ContainerStarted","Data":"95cd04992d9b29affff1b253b3b6ad579c42f65763c8553a6352a96820fc3a49"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.704554 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gccvh" event={"ID":"d3359901-3a4a-41e2-8903-cc77b459a563","Type":"ContainerStarted","Data":"acde5350183da0277e00affb0868d6cb368ca1eb0d1aab7417235395e56cfe42"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.716747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v59z8" event={"ID":"3ed10be6-6292-4eed-abf0-14117bc24266","Type":"ContainerStarted","Data":"136074a02d92e69794b5b8106c8be4736fd6f6dadc470da903ce84968d8e99a4"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.719535 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" event={"ID":"74b89f19-21ec-44b6-8c7f-f0968db841be","Type":"ContainerStarted","Data":"1c93b9547d9ac7a2569305bb4053086d9b4ea412d47ac8c9d9e18334999462a9"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.722610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" event={"ID":"971371ea-478f-4f96-9693-9aa9a8897a38","Type":"ContainerStarted","Data":"7b8b72e55940f8d97f7ef13f3d7bab0d135cd9209a9b18cf5eaeef08583a8971"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.722671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" event={"ID":"971371ea-478f-4f96-9693-9aa9a8897a38","Type":"ContainerStarted","Data":"d67d5f4b1b29547fcccaf764c8ab6a23fb098a57c68b85bfa4607a7711419ead"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.723690 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.726121 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" event={"ID":"2b22849d-9632-4f0a-96f4-997aa91300eb","Type":"ContainerStarted","Data":"1ee1e95db537295ecb54b380573080cfede7347363d6c64add132bf6a8824336"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.726171 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" event={"ID":"2b22849d-9632-4f0a-96f4-997aa91300eb","Type":"ContainerStarted","Data":"09d14ae47aac91ac4e0ee4c70ed2212bacceb08793106e075050c67a89781ae2"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.730132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" event={"ID":"679aa60c-9c8d-4596-81b3-b582dd821f2f","Type":"ContainerStarted","Data":"c2dc3052eff9abe15e9e69d5021173ed48809c787687bee0d543d1f0acfb256b"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.730183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" event={"ID":"679aa60c-9c8d-4596-81b3-b582dd821f2f","Type":"ContainerStarted","Data":"29cccf61dd470f1680e99a57d5091fb21fb17b60aaf6c044ecf0a98b8c5dd8f0"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.730273 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lgnww container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.730560 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" podUID="971371ea-478f-4f96-9693-9aa9a8897a38" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.732287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" event={"ID":"eb023c77-9c18-4fb9-a6cd-53992aec9a4d","Type":"ContainerStarted","Data":"a86e3096149a27e44063be8afd787e51a2148ad9c306aa7c91d4ff99f5a82a4f"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.748299 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-plzjs"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.789787 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.793523 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" event={"ID":"7a45b5f3-7e9f-47b5-87eb-457c581c9fff","Type":"ContainerStarted","Data":"6ef4bcf6c72fb6180c78c25ee75e314e835f59a2fd2a320bb1be6a08a3cbb8d1"} Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.793642 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.293617563 +0000 UTC m=+161.944941975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.803843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rrz8c" event={"ID":"15e99fdb-21ba-4e48-a4d3-6e93f9907413","Type":"ContainerStarted","Data":"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.803897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rrz8c" event={"ID":"15e99fdb-21ba-4e48-a4d3-6e93f9907413","Type":"ContainerStarted","Data":"b042f8ff6c135461120e16f04d5d375d85b6bac698d079a8f35b0477a8012337"} Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.804956 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zrj2p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.805031 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.808309 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.808369 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.810373 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qs4fq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.810426 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection 
refused" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.849660 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:14:19 crc kubenswrapper[4833]: W0127 14:14:19.871390 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652f5322_b51b_4901_86e6_119f342d3c7c.slice/crio-6db8748f16fe9779e1460b4013e9ebbce21d8f4f52e7bd1cddba8962309cd7c9 WatchSource:0}: Error finding container 6db8748f16fe9779e1460b4013e9ebbce21d8f4f52e7bd1cddba8962309cd7c9: Status 404 returned error can't find the container with id 6db8748f16fe9779e1460b4013e9ebbce21d8f4f52e7bd1cddba8962309cd7c9 Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.895310 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.895382 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.895398 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9rt6c"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.896621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:19 crc kubenswrapper[4833]: E0127 14:14:19.897182 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.397167608 +0000 UTC m=+162.048492010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.900923 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" podStartSLOduration=137.90090687 podStartE2EDuration="2m17.90090687s" podCreationTimestamp="2026-01-27 14:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:19.882719644 +0000 UTC m=+161.534044046" watchObservedRunningTime="2026-01-27 14:14:19.90090687 +0000 UTC m=+161.552231272" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.912855 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.915033 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv"] Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.915629 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x5778" podStartSLOduration=138.915616541 podStartE2EDuration="2m18.915616541s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:19.91043954 +0000 UTC m=+161.561763942" watchObservedRunningTime="2026-01-27 14:14:19.915616541 +0000 UTC m=+161.566940943" Jan 27 14:14:19 crc kubenswrapper[4833]: W0127 14:14:19.937636 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d9a6109_4066_468d_ba64_f28d15274e91.slice/crio-7f15f3d84e452ffb47a7411e3768fc4460a7b9eed83b8d3bef9d9d9caf6749b1 WatchSource:0}: Error finding container 7f15f3d84e452ffb47a7411e3768fc4460a7b9eed83b8d3bef9d9d9caf6749b1: Status 404 returned error can't find the container with id 7f15f3d84e452ffb47a7411e3768fc4460a7b9eed83b8d3bef9d9d9caf6749b1 Jan 27 14:14:19 crc kubenswrapper[4833]: W0127 14:14:19.970605 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fdbd9ee_ce4e_4dce_98ac_670862cdf5da.slice/crio-93c763344aa77329fca39cf1ebe9b2297334c234500b8b03284a03f8297a2a7c WatchSource:0}: Error finding container 93c763344aa77329fca39cf1ebe9b2297334c234500b8b03284a03f8297a2a7c: Status 404 returned error can't find the container with id 93c763344aa77329fca39cf1ebe9b2297334c234500b8b03284a03f8297a2a7c Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.983225 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-rrz8c" podStartSLOduration=138.983203864 podStartE2EDuration="2m18.983203864s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:19.982637429 +0000 UTC m=+161.633961851" watchObservedRunningTime="2026-01-27 14:14:19.983203864 +0000 UTC m=+161.634528266" Jan 27 14:14:19 crc kubenswrapper[4833]: I0127 14:14:19.997862 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:19.999358 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.499330995 +0000 UTC m=+162.150655427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.101131 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bs9ln"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.106993 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.108618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:20 crc 
kubenswrapper[4833]: E0127 14:14:20.109241 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.609226562 +0000 UTC m=+162.260550964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.124668 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.141408 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.142300 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.147158 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.147465 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.147513 4833 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.147610 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podStartSLOduration=139.14759184 podStartE2EDuration="2m19.14759184s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.109587433 +0000 UTC m=+161.760911835" watchObservedRunningTime="2026-01-27 14:14:20.14759184 +0000 UTC m=+161.798916242" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.165858 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podStartSLOduration=138.165835787 podStartE2EDuration="2m18.165835787s" podCreationTimestamp="2026-01-27 14:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.12817071 +0000 UTC m=+161.779495112" watchObservedRunningTime="2026-01-27 14:14:20.165835787 +0000 UTC m=+161.817160189" Jan 27 14:14:20 crc kubenswrapper[4833]: W0127 14:14:20.166863 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e0dc7f_981b_4ed6_999a_75ca3b351704.slice/crio-c87cce0a2235edefbe85a431a054833071835220342f0dac7127ff10ae4bdf0b WatchSource:0}: Error finding container c87cce0a2235edefbe85a431a054833071835220342f0dac7127ff10ae4bdf0b: Status 404 returned error can't find the container with id 
c87cce0a2235edefbe85a431a054833071835220342f0dac7127ff10ae4bdf0b Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.167601 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ghfvn" podStartSLOduration=139.167595045 podStartE2EDuration="2m19.167595045s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.16117783 +0000 UTC m=+161.812502242" watchObservedRunningTime="2026-01-27 14:14:20.167595045 +0000 UTC m=+161.818919447" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.170036 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.189365 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wblbf" podStartSLOduration=139.189342479 podStartE2EDuration="2m19.189342479s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.183371165 +0000 UTC m=+161.834695587" watchObservedRunningTime="2026-01-27 14:14:20.189342479 +0000 UTC m=+161.840666881" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.209113 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.209550 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.709519329 +0000 UTC m=+162.360843731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.225896 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wrnt6" podStartSLOduration=139.225867735 podStartE2EDuration="2m19.225867735s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.225022622 +0000 UTC m=+161.876347024" watchObservedRunningTime="2026-01-27 14:14:20.225867735 +0000 UTC m=+161.877192127" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.243289 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n2rfz"] Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.264728 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-p25n6" podStartSLOduration=139.264710955 podStartE2EDuration="2m19.264710955s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.264354165 +0000 UTC 
m=+161.915678577" watchObservedRunningTime="2026-01-27 14:14:20.264710955 +0000 UTC m=+161.916035357" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.311025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.311267 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.811248814 +0000 UTC m=+162.462573216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.419982 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.420342 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.920297169 +0000 UTC m=+162.571621581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.420705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.421546 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:20.921534423 +0000 UTC m=+162.572858825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.467794 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" podStartSLOduration=139.467775355 podStartE2EDuration="2m19.467775355s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.466978132 +0000 UTC m=+162.118302524" watchObservedRunningTime="2026-01-27 14:14:20.467775355 +0000 UTC m=+162.119099757" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.506034 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8n5cx" podStartSLOduration=139.506012447 podStartE2EDuration="2m19.506012447s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.503321164 +0000 UTC m=+162.154645566" watchObservedRunningTime="2026-01-27 14:14:20.506012447 +0000 UTC m=+162.157336859" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.534415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.534846 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.034820613 +0000 UTC m=+162.686145015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.636104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.636928 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.136909348 +0000 UTC m=+162.788233750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.738286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.739770 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.239742024 +0000 UTC m=+162.891066426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.836133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" event={"ID":"810332c8-987e-485f-9940-d1b61944b1a8","Type":"ContainerStarted","Data":"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.839806 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c44jl" podStartSLOduration=139.839790973 podStartE2EDuration="2m19.839790973s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:20.783392524 +0000 UTC m=+162.434716926" watchObservedRunningTime="2026-01-27 14:14:20.839790973 +0000 UTC m=+162.491115375" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.840514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.840906 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.340892973 +0000 UTC m=+162.992217375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.847389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" event={"ID":"679aa60c-9c8d-4596-81b3-b582dd821f2f","Type":"ContainerStarted","Data":"0ef267105168296e96d18bb1f0d51abe2c9a7805cc707554c3a1bc2cb93abfbf"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.848630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" event={"ID":"7b818454-cee7-4a54-a628-1358afa71ef8","Type":"ContainerStarted","Data":"b49f2076ac0c524d65e1a53a22734df06c807f21d778f44390f18c31de76915e"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.849874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xcpgj" event={"ID":"c1efa337-030b-4790-be14-301fd44a869c","Type":"ContainerStarted","Data":"630a62d305d43724efba42c5c711317a321058f58663758a9970c6ba46a0d859"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.849972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xcpgj" 
event={"ID":"c1efa337-030b-4790-be14-301fd44a869c","Type":"ContainerStarted","Data":"a2ee4455b8aecfd39f6cd2226156fa4c2655fe9eab9f3bfcae0942d3f790633c"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.852218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" event={"ID":"09f62199-a975-4b50-8872-2e9c47b174ec","Type":"ContainerStarted","Data":"8c129b0497aaf235e2ca25f25e6d91a423b4df0e6decc846e8bc75480194e868"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.852245 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" event={"ID":"09f62199-a975-4b50-8872-2e9c47b174ec","Type":"ContainerStarted","Data":"022b762b83a7ae18af7712cc5874ea830cb0ec8e52a56251f826125456f2729f"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.863084 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.903550 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" event={"ID":"f116ab69-14f9-4136-904c-730947658d83","Type":"ContainerStarted","Data":"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.903614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" event={"ID":"f116ab69-14f9-4136-904c-730947658d83","Type":"ContainerStarted","Data":"301526166a1d705f3a6a8bfcee80572ffd157706990629b652763455648272ce"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.942290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:20 crc kubenswrapper[4833]: E0127 14:14:20.942700 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.44268409 +0000 UTC m=+163.094008492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.957184 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" event={"ID":"74b89f19-21ec-44b6-8c7f-f0968db841be","Type":"ContainerStarted","Data":"2fa1b2850379235dbabe369b8e0dad300429c462c98003456f3b23b09cd5b311"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.958573 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.968680 4833 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-t7dr8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 14:14:20 crc kubenswrapper[4833]: 
I0127 14:14:20.968767 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" podUID="74b89f19-21ec-44b6-8c7f-f0968db841be" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.984542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" event={"ID":"50e0dc7f-981b-4ed6-999a-75ca3b351704","Type":"ContainerStarted","Data":"c87cce0a2235edefbe85a431a054833071835220342f0dac7127ff10ae4bdf0b"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.990788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" event={"ID":"b330063a-01a9-4719-9975-726b52f04189","Type":"ContainerStarted","Data":"00f1fc5ecfc122e0630a60a373072a53d84d29f6a4afef4deceb0fa5fc7b0d45"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.993279 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" event={"ID":"6d9a6109-4066-468d-ba64-f28d15274e91","Type":"ContainerStarted","Data":"7f15f3d84e452ffb47a7411e3768fc4460a7b9eed83b8d3bef9d9d9caf6749b1"} Jan 27 14:14:20 crc kubenswrapper[4833]: I0127 14:14:20.997432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" event={"ID":"80be9928-86a1-457d-b9b8-62a5a455362a","Type":"ContainerStarted","Data":"1986a500d1cd9cf97579ebf3a7738d02d77c71aff7fb7d1685a3b2ae36965fd7"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.021992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" 
event={"ID":"9611f809-5d0e-47d9-90ad-b0799b4b786b","Type":"ContainerStarted","Data":"53434d6f536ef186f76d2a39e7a8f06e245224b506fc153205a208efab7d4d62"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.046140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-v59z8" event={"ID":"3ed10be6-6292-4eed-abf0-14117bc24266","Type":"ContainerStarted","Data":"a6a8847f5dffb5264edd5d4e635d28f108e241957a482edc58d9fd95c6229a50"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.046876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.048143 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.548129707 +0000 UTC m=+163.199454109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.048166 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.057597 4833 patch_prober.go:28] interesting pod/console-operator-58897d9998-v59z8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.057672 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v59z8" podUID="3ed10be6-6292-4eed-abf0-14117bc24266" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.058627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" event={"ID":"579a9e43-c282-4a46-ab4a-4b4564a8344f","Type":"ContainerStarted","Data":"eca31b657422d347de0b67cf2f30097ca86f156d954153c49e67081553714450"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.072130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" 
event={"ID":"7023131d-ca42-4f25-a67c-80007823bf08","Type":"ContainerStarted","Data":"5f0c6a79b3fd846b1fcb6ae27593e83721156ef004cc238e1d5ab1eaf42f8985"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.079163 4833 generic.go:334] "Generic (PLEG): container finished" podID="84fae95f-8d24-4b0b-ab4d-a73565d4b64e" containerID="0aef1cbf52e7df772ffde4ad9b29b6e799505ca0f69832c28f21a0fdd369553a" exitCode=0 Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.079285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" event={"ID":"84fae95f-8d24-4b0b-ab4d-a73565d4b64e","Type":"ContainerDied","Data":"0aef1cbf52e7df772ffde4ad9b29b6e799505ca0f69832c28f21a0fdd369553a"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.111085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" event={"ID":"cb0ef16c-4892-4013-9fb2-0826a86ee88c","Type":"ContainerStarted","Data":"92691aa59241bb29a40ec79fb72fdf90ed4b3a9aa108376f44c879bb998fcd8c"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.125138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" event={"ID":"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da","Type":"ContainerStarted","Data":"93c763344aa77329fca39cf1ebe9b2297334c234500b8b03284a03f8297a2a7c"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.140348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" event={"ID":"efb05803-c39a-4610-9448-5950f6aa84f0","Type":"ContainerStarted","Data":"f12728c201f0e5340a9fe5854985c561bb526d05aac46fcfcde09e71ba717991"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.149974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.153634 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:21 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:21 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:21 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.154062 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.155273 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.655242419 +0000 UTC m=+163.306566821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.155338 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.157030 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.657022397 +0000 UTC m=+163.308346799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.177370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" event={"ID":"80bbbc08-a5ce-4139-837b-ad932e055904","Type":"ContainerStarted","Data":"40a7b0f2a837033a7048ec7dae8a43eb835f6b15cfa67043b0abe0341a9b719a"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.262582 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.268501 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.767043948 +0000 UTC m=+163.418368360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.336781 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gccvh" event={"ID":"d3359901-3a4a-41e2-8903-cc77b459a563","Type":"ContainerStarted","Data":"1e3b63732f85b391365d8ddaa3000159068462a848120d933deabee6288190e1"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.337076 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" event={"ID":"eb023c77-9c18-4fb9-a6cd-53992aec9a4d","Type":"ContainerStarted","Data":"49da9e044a3dd221b42fe3ade29ce8adac4f2ea7c90044fea8356ba6a895b116"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.364312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.364674 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.864660342 +0000 UTC m=+163.515984744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.391181 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" event={"ID":"76d19e5b-7dc6-48cb-946e-52f510d988ae","Type":"ContainerStarted","Data":"5a2c44c00640b2cc538befb0474c8bacf0bd227a8bd7f21eed0606d3c4f8a942"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.402830 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" event={"ID":"9704f9c7-174a-4eb5-9045-2cc38c248bdc","Type":"ContainerStarted","Data":"3fe394f97a7603eee0cb577e78bee2b82ffd80643dbabf62e75e0117427a8ac9"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.405314 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-plzjs" event={"ID":"652f5322-b51b-4901-86e6-119f342d3c7c","Type":"ContainerStarted","Data":"accd5c28d110a632dd03b6f0ca531cda613391b708bc021f876151261740be57"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.405358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-plzjs" event={"ID":"652f5322-b51b-4901-86e6-119f342d3c7c","Type":"ContainerStarted","Data":"6db8748f16fe9779e1460b4013e9ebbce21d8f4f52e7bd1cddba8962309cd7c9"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.439040 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-v59z8" 
podStartSLOduration=140.43901826 podStartE2EDuration="2m20.43901826s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.438788344 +0000 UTC m=+163.090112756" watchObservedRunningTime="2026-01-27 14:14:21.43901826 +0000 UTC m=+163.090342662" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.441345 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2ckx" podStartSLOduration=140.441336284 podStartE2EDuration="2m20.441336284s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.384743829 +0000 UTC m=+163.036068231" watchObservedRunningTime="2026-01-27 14:14:21.441336284 +0000 UTC m=+163.092660706" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.467121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.468486 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:21.968438453 +0000 UTC m=+163.619762855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.491559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" event={"ID":"20979785-edd8-4fca-96b2-3c7eb89cce18","Type":"ContainerStarted","Data":"cd5f0de969c9e1140a8dd8829f0071b122f7edaa9c7ba8fb18eede6c3bce396c"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.502687 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gccvh" podStartSLOduration=6.502662057 podStartE2EDuration="6.502662057s" podCreationTimestamp="2026-01-27 14:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.477202653 +0000 UTC m=+163.128527055" watchObservedRunningTime="2026-01-27 14:14:21.502662057 +0000 UTC m=+163.153986459" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.506610 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" podStartSLOduration=140.506588184 podStartE2EDuration="2m20.506588184s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.503650553 +0000 UTC m=+163.154974965" watchObservedRunningTime="2026-01-27 14:14:21.506588184 +0000 UTC m=+163.157912596" Jan 27 14:14:21 crc 
kubenswrapper[4833]: I0127 14:14:21.521811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" event={"ID":"902663e2-9d1b-47a2-af8b-fcd67c717b70","Type":"ContainerStarted","Data":"006a7de63d015f3496bacee73e237e15e4f73662ce598eee47ddc1ea9ed1afb0"} Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.526651 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.526730 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.531722 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" podStartSLOduration=140.531703778 podStartE2EDuration="2m20.531703778s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.5309972 +0000 UTC m=+163.182321622" watchObservedRunningTime="2026-01-27 14:14:21.531703778 +0000 UTC m=+163.183028180" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.559880 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.569156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.571998 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.071982927 +0000 UTC m=+163.723307329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.592974 4833 csr.go:261] certificate signing request csr-cjdhr is approved, waiting to be issued Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.602787 4833 csr.go:257] certificate signing request csr-cjdhr is issued Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.603557 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9p6ff" podStartSLOduration=140.603540569 podStartE2EDuration="2m20.603540569s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.560160215 +0000 UTC m=+163.211484637" watchObservedRunningTime="2026-01-27 14:14:21.603540569 +0000 UTC 
m=+163.254864971" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.618650 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rcfzx" podStartSLOduration=140.618620301 podStartE2EDuration="2m20.618620301s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.599866258 +0000 UTC m=+163.251190680" watchObservedRunningTime="2026-01-27 14:14:21.618620301 +0000 UTC m=+163.269944703" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.678458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.679975 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.179958753 +0000 UTC m=+163.831283155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.732130 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" podStartSLOduration=139.732111367 podStartE2EDuration="2m19.732111367s" podCreationTimestamp="2026-01-27 14:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.731483589 +0000 UTC m=+163.382807981" watchObservedRunningTime="2026-01-27 14:14:21.732111367 +0000 UTC m=+163.383435769" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.773669 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" podStartSLOduration=140.773647109 podStartE2EDuration="2m20.773647109s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.767120452 +0000 UTC m=+163.418444844" watchObservedRunningTime="2026-01-27 14:14:21.773647109 +0000 UTC m=+163.424971511" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.780813 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: 
\"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.781131 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.281119734 +0000 UTC m=+163.932444136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.857130 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tngc9" podStartSLOduration=140.857088976 podStartE2EDuration="2m20.857088976s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.856983183 +0000 UTC m=+163.508307595" watchObservedRunningTime="2026-01-27 14:14:21.857088976 +0000 UTC m=+163.508413388" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.857305 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-plzjs" podStartSLOduration=6.857300851 podStartE2EDuration="6.857300851s" podCreationTimestamp="2026-01-27 14:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.815980414 +0000 
UTC m=+163.467304836" watchObservedRunningTime="2026-01-27 14:14:21.857300851 +0000 UTC m=+163.508625253" Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.883215 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.884082 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.384059001 +0000 UTC m=+164.035383403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:21 crc kubenswrapper[4833]: I0127 14:14:21.914521 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-b69ft" podStartSLOduration=140.914501692 podStartE2EDuration="2m20.914501692s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:21.913893685 +0000 UTC m=+163.565218087" watchObservedRunningTime="2026-01-27 14:14:21.914501692 +0000 UTC m=+163.565826094" Jan 27 14:14:21 crc kubenswrapper[4833]: 
I0127 14:14:21.988473 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:21 crc kubenswrapper[4833]: E0127 14:14:21.989388 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.489371104 +0000 UTC m=+164.140695506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.090766 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.091069 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:22.591054808 +0000 UTC m=+164.242379210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.139339 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:22 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:22 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:22 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.139438 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.192953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.194059 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.694040667 +0000 UTC m=+164.345365069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.294956 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.295242 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.795190448 +0000 UTC m=+164.446514860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.295322 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.295670 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.795632249 +0000 UTC m=+164.446956651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.396725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.396951 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.896920122 +0000 UTC m=+164.548244524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.397567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.397941 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.89792874 +0000 UTC m=+164.549253142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.444269 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.444410 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.499519 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.499750 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:22.999719207 +0000 UTC m=+164.651043609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.499856 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.500200 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.00019321 +0000 UTC m=+164.651517612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.526679 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lgnww container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.526753 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" podUID="971371ea-478f-4f96-9693-9aa9a8897a38" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.528313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" event={"ID":"6d9a6109-4066-468d-ba64-f28d15274e91","Type":"ContainerStarted","Data":"a1d0b75ecbbd32f80c4c3eed4f730d492ce3aff00a2f3f29c1953d81944ba1ac"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.530428 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" 
event={"ID":"80be9928-86a1-457d-b9b8-62a5a455362a","Type":"ContainerStarted","Data":"751b9cb0a08fce7f7211730d062b313be150ee0ced1ae5e68e7452a5aa555b13"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.532788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" event={"ID":"84fae95f-8d24-4b0b-ab4d-a73565d4b64e","Type":"ContainerStarted","Data":"a5de57ad1a298f002ef871285154b1e714fedb0467ba0f50d914ea60902e73b0"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.533152 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.534677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" event={"ID":"9611f809-5d0e-47d9-90ad-b0799b4b786b","Type":"ContainerStarted","Data":"e1055b6bc14db69765a0435d952659dc0a026c946c833186aa91354062e2cdab"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.537612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" event={"ID":"b330063a-01a9-4719-9975-726b52f04189","Type":"ContainerStarted","Data":"3777dc404709ae9174509826dc6f902f83bf4306f4f736bcdef01d9a28fd881e"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.537641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" event={"ID":"b330063a-01a9-4719-9975-726b52f04189","Type":"ContainerStarted","Data":"5d937fa5c7608673918f522e63176fa4de41704ba57328927d0d338cd18c1f70"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.539965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" 
event={"ID":"a5ddd293-64a8-4907-a786-a4c0b5d57ab9","Type":"ContainerStarted","Data":"ca0c4bba5d377299615c0867f8ffdec1de2effe5e0952a15d37d59fad4722d49"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.539993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" event={"ID":"a5ddd293-64a8-4907-a786-a4c0b5d57ab9","Type":"ContainerStarted","Data":"fd50994b4f2544be90aaa3686c71f9c81fe1b9e33fddd45b50574c331e102623"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.542140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" event={"ID":"579a9e43-c282-4a46-ab4a-4b4564a8344f","Type":"ContainerStarted","Data":"c2e918fa9478d9ac089fa694b5eb06828b6f21239eb7806fb81b793f54fea278"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.542196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" event={"ID":"579a9e43-c282-4a46-ab4a-4b4564a8344f","Type":"ContainerStarted","Data":"5d73850ba70e10adf5677f68bdca4465cc044467d1dbca8f8309f803113f26ff"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.543883 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xcpgj" event={"ID":"c1efa337-030b-4790-be14-301fd44a869c","Type":"ContainerStarted","Data":"47a13dc04fde1944f93a39830861b3995285782f68b2b36618323061f7d4f1c8"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.544267 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.545357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" event={"ID":"7b818454-cee7-4a54-a628-1358afa71ef8","Type":"ContainerStarted","Data":"33acdf6df6a2e54c0e5aafd0ebaedf90d7c81db70c65db46cc99f6de8b2508b1"} Jan 27 14:14:22 crc 
kubenswrapper[4833]: I0127 14:14:22.545952 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.547397 4833 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-mn2nx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.547436 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" podUID="7b818454-cee7-4a54-a628-1358afa71ef8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.548366 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" event={"ID":"80bbbc08-a5ce-4139-837b-ad932e055904","Type":"ContainerStarted","Data":"ae9530911ee96abb1f8a0b02e05be41e8772cfd2a89cad8035ba4bc792a350cc"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.550222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" event={"ID":"20979785-edd8-4fca-96b2-3c7eb89cce18","Type":"ContainerStarted","Data":"b43bf39cd17b1715c5ddfd0a2b0869f07adb0ebe151da42fdf10ff418788d0bc"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.552112 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" event={"ID":"cb0ef16c-4892-4013-9fb2-0826a86ee88c","Type":"ContainerStarted","Data":"c4832db921ad41703f1c41f153bcd5a5b9c4fe90be9c916fa6e829bf80f3e374"} Jan 27 14:14:22 crc 
kubenswrapper[4833]: I0127 14:14:22.552137 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" event={"ID":"cb0ef16c-4892-4013-9fb2-0826a86ee88c","Type":"ContainerStarted","Data":"22242245cd3592f557c4527cd6273da52552d839e5369a6806c971a1aa859f88"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.553565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" event={"ID":"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da","Type":"ContainerStarted","Data":"a16855e2089780be9c98db8f1fc32955beecf33f86392c270b499c23cd25e99f"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.553590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" event={"ID":"0fdbd9ee-ce4e-4dce-98ac-670862cdf5da","Type":"ContainerStarted","Data":"17f70b4310db1f324688be51cd00647c6f879a2aa84cd88013db6c6fb4092239"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.555021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n2rfz" event={"ID":"9704f9c7-174a-4eb5-9045-2cc38c248bdc","Type":"ContainerStarted","Data":"de934f3e1f85065e1f7ba77d08f7a51e0c5242d4f748a3db9a32cae14efd0940"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.557055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" event={"ID":"902663e2-9d1b-47a2-af8b-fcd67c717b70","Type":"ContainerStarted","Data":"97ccb2c62505785dd5d58879edef73f586f6368e2b5c7ea9850b522eac91ab0e"} Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.559668 4833 patch_prober.go:28] interesting pod/console-operator-58897d9998-v59z8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: 
connect: connection refused" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.559703 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-v59z8" podUID="3ed10be6-6292-4eed-abf0-14117bc24266" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.559754 4833 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-pbxl9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.559766 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" podUID="810332c8-987e-485f-9940-d1b61944b1a8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.560546 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-czv2v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.560610 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.561063 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.561089 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.566022 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-f9mp8" podStartSLOduration=141.566008245 podStartE2EDuration="2m21.566008245s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.563814316 +0000 UTC m=+164.215138718" watchObservedRunningTime="2026-01-27 14:14:22.566008245 +0000 UTC m=+164.217332647" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.601422 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.601609 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.101580396 +0000 UTC m=+164.752904798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.601770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.602150 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.102143531 +0000 UTC m=+164.753467933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.604392 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 14:09:21 +0000 UTC, rotation deadline is 2026-12-08 16:52:06.47380075 +0000 UTC Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.604420 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7562h37m43.869383106s for next certificate rotation Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.614890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-t7dr8" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.623054 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.652945 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-sgpzc" podStartSLOduration=141.652923826 podStartE2EDuration="2m21.652923826s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.606013327 +0000 UTC m=+164.257337739" watchObservedRunningTime="2026-01-27 14:14:22.652923826 +0000 UTC m=+164.304248228" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.653050 4833 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gmnmv" podStartSLOduration=141.65304648 podStartE2EDuration="2m21.65304648s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.652342651 +0000 UTC m=+164.303667053" watchObservedRunningTime="2026-01-27 14:14:22.65304648 +0000 UTC m=+164.304370882" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.686984 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" podStartSLOduration=141.686967586 podStartE2EDuration="2m21.686967586s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.684993231 +0000 UTC m=+164.336317643" watchObservedRunningTime="2026-01-27 14:14:22.686967586 +0000 UTC m=+164.338291988" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.702678 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.704572 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.204522444 +0000 UTC m=+164.855846866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.728644 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" podStartSLOduration=141.728623611 podStartE2EDuration="2m21.728623611s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.727954794 +0000 UTC m=+164.379279196" watchObservedRunningTime="2026-01-27 14:14:22.728623611 +0000 UTC m=+164.379948013" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.800389 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5dsln" podStartSLOduration=141.800347999 podStartE2EDuration="2m21.800347999s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.758357043 +0000 UTC m=+164.409681445" watchObservedRunningTime="2026-01-27 14:14:22.800347999 +0000 UTC m=+164.451672401" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.805133 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: 
\"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.805567 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.30554766 +0000 UTC m=+164.956872142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.853381 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-84mnv" podStartSLOduration=141.853354634 podStartE2EDuration="2m21.853354634s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.802504137 +0000 UTC m=+164.453828569" watchObservedRunningTime="2026-01-27 14:14:22.853354634 +0000 UTC m=+164.504679046" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.907063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:22 crc kubenswrapper[4833]: E0127 14:14:22.907511 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.407488361 +0000 UTC m=+165.058812763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.911819 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-62vh8" podStartSLOduration=141.911793319 podStartE2EDuration="2m21.911793319s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.910137204 +0000 UTC m=+164.561461626" watchObservedRunningTime="2026-01-27 14:14:22.911793319 +0000 UTC m=+164.563117721" Jan 27 14:14:22 crc kubenswrapper[4833]: I0127 14:14:22.915154 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9rt6c" podStartSLOduration=141.915136971 podStartE2EDuration="2m21.915136971s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.854773173 +0000 UTC m=+164.506097595" watchObservedRunningTime="2026-01-27 14:14:22.915136971 +0000 UTC 
m=+164.566461383" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.005205 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" podStartSLOduration=142.005183837 podStartE2EDuration="2m22.005183837s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:22.959902441 +0000 UTC m=+164.611226853" watchObservedRunningTime="2026-01-27 14:14:23.005183837 +0000 UTC m=+164.656508239" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.007764 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4qx7h" podStartSLOduration=141.007756417 podStartE2EDuration="2m21.007756417s" podCreationTimestamp="2026-01-27 14:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.003892731 +0000 UTC m=+164.655217133" watchObservedRunningTime="2026-01-27 14:14:23.007756417 +0000 UTC m=+164.659080819" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.012197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.012628 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:23.512614269 +0000 UTC m=+165.163938671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.052172 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bhf48" podStartSLOduration=142.052152798 podStartE2EDuration="2m22.052152798s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.050704008 +0000 UTC m=+164.702028400" watchObservedRunningTime="2026-01-27 14:14:23.052152798 +0000 UTC m=+164.703477200" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.113393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.113764 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.613744658 +0000 UTC m=+165.265069070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.144187 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:23 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:23 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:23 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.144673 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.210909 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" podStartSLOduration=142.210883678 podStartE2EDuration="2m22.210883678s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.117570902 +0000 UTC m=+164.768895304" watchObservedRunningTime="2026-01-27 14:14:23.210883678 +0000 UTC m=+164.862208080" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.225014 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.225459 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.725426725 +0000 UTC m=+165.376751127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.311747 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lgnww" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.325977 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.326237 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.826204204 +0000 UTC m=+165.477528606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.326774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.327196 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.827187361 +0000 UTC m=+165.478511843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.349176 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9xp9d" podStartSLOduration=142.34915292 podStartE2EDuration="2m22.34915292s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.344606586 +0000 UTC m=+164.995931008" watchObservedRunningTime="2026-01-27 14:14:23.34915292 +0000 UTC m=+165.000477322" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.351937 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" podStartSLOduration=142.351917565 podStartE2EDuration="2m22.351917565s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.209993544 +0000 UTC m=+164.861317946" watchObservedRunningTime="2026-01-27 14:14:23.351917565 +0000 UTC m=+165.003241967" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.428393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.428872 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:23.928853904 +0000 UTC m=+165.580178316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.523302 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xcpgj" podStartSLOduration=8.523280691 podStartE2EDuration="8.523280691s" podCreationTimestamp="2026-01-27 14:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:23.434260482 +0000 UTC m=+165.085584884" watchObservedRunningTime="2026-01-27 14:14:23.523280691 +0000 UTC m=+165.174605093" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.529933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.530342 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.030320513 +0000 UTC m=+165.681644995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.590787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" event={"ID":"50e0dc7f-981b-4ed6-999a-75ca3b351704","Type":"ContainerStarted","Data":"efb0920a1af571ca3e9b8c7736ab09bc409a1ac1c396a5078730a5c52740d195"} Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.592215 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-czv2v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.592255 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.604803 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-v59z8" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.608630 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fqhh4" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.620672 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mn2nx" Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.631239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.631604 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.131588405 +0000 UTC m=+165.782912807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.733876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.738286 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.238270435 +0000 UTC m=+165.889594837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.835194 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.835307 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.335290453 +0000 UTC m=+165.986614855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.835593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.835881 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.335874458 +0000 UTC m=+165.987198860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:23 crc kubenswrapper[4833]: I0127 14:14:23.937071 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:23 crc kubenswrapper[4833]: E0127 14:14:23.937395 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.437379898 +0000 UTC m=+166.088704290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.038884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.039340 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.539323048 +0000 UTC m=+166.190647450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.139912 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.140134 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.640083647 +0000 UTC m=+166.291408049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.140192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.140601 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.640590562 +0000 UTC m=+166.291915044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.141583 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:24 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:24 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:24 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.141644 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.194841 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.241513 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.241711 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.741681809 +0000 UTC m=+166.393006211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.241832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.241857 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.242224 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.742216444 +0000 UTC m=+166.393540846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.272955 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/71508df5-3756-4f7d-ba4a-5dc54fa67ba6-metrics-certs\") pod \"network-metrics-daemon-jxvwd\" (UID: \"71508df5-3756-4f7d-ba4a-5dc54fa67ba6\") " pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.343074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.343294 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.84326121 +0000 UTC m=+166.494585612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.343351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.343897 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.843880267 +0000 UTC m=+166.495204669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.366691 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-8jwlr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.366756 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" podUID="84fae95f-8d24-4b0b-ab4d-a73565d4b64e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.366759 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-8jwlr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.366824 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" podUID="84fae95f-8d24-4b0b-ab4d-a73565d4b64e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: 
connection refused" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.445069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.445270 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:24.945237572 +0000 UTC m=+166.596561984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.445414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.445765 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:24.945755107 +0000 UTC m=+166.597079599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.534641 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxvwd" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.557383 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.557612 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.057575797 +0000 UTC m=+166.708900189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.557661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.558009 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.057996579 +0000 UTC m=+166.709320981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.631795 4833 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-8jwlr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.632341 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" podUID="84fae95f-8d24-4b0b-ab4d-a73565d4b64e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.632143 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-czv2v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.632426 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 27 
14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.658561 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.658777 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.158747957 +0000 UTC m=+166.810072359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.659480 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.660272 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:25.160256698 +0000 UTC m=+166.811581100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.766641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.766969 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.266948569 +0000 UTC m=+166.918272961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.869192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.869912 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.369895538 +0000 UTC m=+167.021219940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:24 crc kubenswrapper[4833]: I0127 14:14:24.969903 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:24 crc kubenswrapper[4833]: E0127 14:14:24.970151 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.470136832 +0000 UTC m=+167.121461234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.071986 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.072289 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.572276908 +0000 UTC m=+167.223601310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.085002 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jxvwd"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.139102 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:25 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:25 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:25 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.139523 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.173572 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.174016 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.673996374 +0000 UTC m=+167.325320776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.274856 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.275277 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.775261386 +0000 UTC m=+167.426585788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.365082 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.366271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.369939 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.376455 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.376623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnzpg\" (UniqueName: \"kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.376644 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.376681 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.376840 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.876827057 +0000 UTC m=+167.528151459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.385159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.477694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnzpg\" (UniqueName: \"kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.477737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.477774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.477796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.478074 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:25.978062089 +0000 UTC m=+167.629386491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.478828 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.479041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.501396 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dnzpg\" (UniqueName: \"kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg\") pod \"community-operators-c5pcs\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.544031 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.545184 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.547740 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.554070 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.579242 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.579483 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.079440624 +0000 UTC m=+167.730765026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.579557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.579611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.579668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.579702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdk4\" (UniqueName: 
\"kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.580054 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.080031061 +0000 UTC m=+167.731355533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.635700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" event={"ID":"71508df5-3756-4f7d-ba4a-5dc54fa67ba6","Type":"ContainerStarted","Data":"e9f7d85c68f56d1b9a9382d8ae9f7648ca04068aa6f7ae847437935937761cc2"} Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.680826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.681030 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.180995015 +0000 UTC m=+167.832319417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdk4\" (UniqueName: \"kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681268 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.681771 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.681916 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.18190078 +0000 UTC m=+167.833225302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.701940 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.707214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdk4\" (UniqueName: \"kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4\") pod \"certified-operators-2md2f\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.766939 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.768081 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.782184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.782610 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.782745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities\") pod \"community-operators-p655b\" (UID: 
\"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.782870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fts44\" (UniqueName: \"kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.783103 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.2830841 +0000 UTC m=+167.934408502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.846805 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.860557 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.884116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.884179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fts44\" (UniqueName: \"kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.884224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.884286 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.884798 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities\") pod \"community-operators-p655b\" 
(UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.885398 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.385382941 +0000 UTC m=+168.036707343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.888612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.943290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fts44\" (UniqueName: \"kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44\") pod \"community-operators-p655b\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.946285 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.947227 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.959776 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.987899 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.988255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9dk4\" (UniqueName: \"kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.988327 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:25 crc kubenswrapper[4833]: I0127 14:14:25.988391 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:25 crc kubenswrapper[4833]: E0127 14:14:25.988596 4833 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.488571786 +0000 UTC m=+168.139896188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.085543 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.094867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.094917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9dk4\" (UniqueName: \"kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.094970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.095011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.095869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.096098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.096264 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.596250543 +0000 UTC m=+168.247574955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.116061 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.116986 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.123791 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.124096 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.132469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9dk4\" (UniqueName: \"kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4\") pod \"certified-operators-k6vp8\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.138323 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.141013 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:26 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:26 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:26 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.141063 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.199088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.199315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.199482 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.199655 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.699636043 +0000 UTC m=+168.350960445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.210437 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:14:26 crc kubenswrapper[4833]: W0127 14:14:26.229604 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod700f73dc_a4b4_402c_acd7_dd23692ff53a.slice/crio-8d25586790bb3e623e52d09bace89ab05c2c827205d9e904abd216aa43696e4d WatchSource:0}: Error finding container 8d25586790bb3e623e52d09bace89ab05c2c827205d9e904abd216aa43696e4d: Status 404 returned error can't find the container with id 8d25586790bb3e623e52d09bace89ab05c2c827205d9e904abd216aa43696e4d Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.280055 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.306256 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.306318 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.306354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.306713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.307000 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:26.806984042 +0000 UTC m=+168.458308444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.335573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.363627 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.409274 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.410417 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:26.910398464 +0000 UTC m=+168.561722866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.429736 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.444854 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.513577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.514199 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.014185215 +0000 UTC m=+168.665509617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.614792 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.614983 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.114950594 +0000 UTC m=+168.766274996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.615121 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.615474 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.115463938 +0000 UTC m=+168.766788410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.641703 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerStarted","Data":"473076b222dd33480a8331fbafc77bf785cf70465506bb22f2e6a0b7031e7aac"} Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.646106 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerStarted","Data":"8d25586790bb3e623e52d09bace89ab05c2c827205d9e904abd216aa43696e4d"} Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.647697 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerStarted","Data":"98e0cddadcb7a602e5553cba74b3dc7906ebeb75f362b5eecf9e0980f434db75"} Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.649978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" event={"ID":"71508df5-3756-4f7d-ba4a-5dc54fa67ba6","Type":"ContainerStarted","Data":"a5491972aaa7ba9e771c4eaf15172850593b7f1644f7847d58f7890895172e8f"} Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.716636 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.716951 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.216935667 +0000 UTC m=+168.868260069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.761134 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:14:26 crc kubenswrapper[4833]: W0127 14:14:26.776407 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod00dc743e_f960_474a_a605_5ae98fe806a8.slice/crio-a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b WatchSource:0}: Error finding container a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b: Status 404 returned error can't find the container with id a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.791247 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:14:26 crc kubenswrapper[4833]: W0127 14:14:26.816233 4833 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73933cea_fb47_4e10_b0d8_bb9d2f3f882f.slice/crio-2a0c592cb7c7d50a93d4f2bc149c649a7a8a6ea7f33344f391ca9bcf6a2ff8cf WatchSource:0}: Error finding container 2a0c592cb7c7d50a93d4f2bc149c649a7a8a6ea7f33344f391ca9bcf6a2ff8cf: Status 404 returned error can't find the container with id 2a0c592cb7c7d50a93d4f2bc149c649a7a8a6ea7f33344f391ca9bcf6a2ff8cf Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.817771 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.818122 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.318109996 +0000 UTC m=+168.969434398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:26 crc kubenswrapper[4833]: I0127 14:14:26.920618 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:26 crc kubenswrapper[4833]: E0127 14:14:26.921077 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.421059465 +0000 UTC m=+169.072383867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.022039 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.022338 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.522325108 +0000 UTC m=+169.173649510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.123476 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.123758 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.623723573 +0000 UTC m=+169.275047985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.140385 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:27 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:27 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:27 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.140463 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.224990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.225340 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:14:27.725328026 +0000 UTC m=+169.376652428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.325619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.325867 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.825830948 +0000 UTC m=+169.477155410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.326099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.327343 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.827325128 +0000 UTC m=+169.478649640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.369010 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8jwlr" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.370599 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.427141 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.427354 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.927323176 +0000 UTC m=+169.578647588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.427477 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.428757 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:27.928741835 +0000 UTC m=+169.580066347 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.440963 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.441021 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.441196 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.441222 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.527986 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.528033 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.528619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.528844 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.028828506 +0000 UTC m=+169.680152908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.528938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.529186 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.029179125 +0000 UTC m=+169.680503527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.537664 4833 patch_prober.go:28] interesting pod/apiserver-76f77b778f-k2dq7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]log ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]etcd ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/max-in-flight-filter ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 14:14:27 crc kubenswrapper[4833]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-startinformers ok Jan 27 14:14:27 crc kubenswrapper[4833]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 14:14:27 crc kubenswrapper[4833]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 14:14:27 crc kubenswrapper[4833]: livez check failed Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.537750 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" podUID="a5ddd293-64a8-4907-a786-a4c0b5d57ab9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.550136 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"] Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.551788 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.554416 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.567898 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"] Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.631206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.631395 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.131365903 +0000 UTC m=+169.782690305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.631557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.631684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.631786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsbs\" (UniqueName: \"kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.631844 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.632606 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.132590996 +0000 UTC m=+169.783915398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.660406 4833 generic.go:334] "Generic (PLEG): container finished" podID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerID="e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383" exitCode=0 Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.660882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerDied","Data":"e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.662949 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerID="23fdfeeacabae801d03c4b86aec8f71204168b27363a9adb0b978af5c0c2697b" exitCode=0 Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.663033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerDied","Data":"23fdfeeacabae801d03c4b86aec8f71204168b27363a9adb0b978af5c0c2697b"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.663858 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.666639 4833 generic.go:334] "Generic (PLEG): container finished" podID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerID="99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13" exitCode=0 Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.666728 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerDied","Data":"99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.674426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" event={"ID":"50e0dc7f-981b-4ed6-999a-75ca3b351704","Type":"ContainerStarted","Data":"73e0d3dd4b8b20a4fdd95f610f2c09660b9f2aa64b7a9afef69c0c941c17f18b"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.675688 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00dc743e-f960-474a-a605-5ae98fe806a8","Type":"ContainerStarted","Data":"3d5d655d392c1a14462df063e07fa0ce92ecc87c5ae11756f5dfcbee2d7cf118"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.678693 4833 generic.go:334] "Generic (PLEG): container finished" podID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerID="3a18beb562e0bccc49e66f2d905f2b2fea669204c36b018ee237cdc1dca000a1" exitCode=0 Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.679572 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00dc743e-f960-474a-a605-5ae98fe806a8","Type":"ContainerStarted","Data":"a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.679629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerDied","Data":"3a18beb562e0bccc49e66f2d905f2b2fea669204c36b018ee237cdc1dca000a1"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.679648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerStarted","Data":"2a0c592cb7c7d50a93d4f2bc149c649a7a8a6ea7f33344f391ca9bcf6a2ff8cf"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.680541 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxvwd" event={"ID":"71508df5-3756-4f7d-ba4a-5dc54fa67ba6","Type":"ContainerStarted","Data":"09a907ec7750edd88c5c134f5ea6d8fd6e82d132fef5cda3c63fa99b753cc862"} Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.699378 4833 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.727100 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jxvwd" podStartSLOduration=146.727078984 podStartE2EDuration="2m26.727078984s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:27.724255527 +0000 UTC m=+169.375579929" watchObservedRunningTime="2026-01-27 14:14:27.727078984 +0000 UTC 
m=+169.378403386" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.733152 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.733529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.733682 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljsbs\" (UniqueName: \"kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.733730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.734636 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.23460603 +0000 UTC m=+169.885930432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.734689 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj"
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.735033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj"
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.777827 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljsbs\" (UniqueName: \"kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs\") pod \"redhat-marketplace-m9thj\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " pod="openshift-marketplace/redhat-marketplace-m9thj"
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.835692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.836165 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.336145089 +0000 UTC m=+169.987469561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.869237 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9thj"
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.936380 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:14:27 crc kubenswrapper[4833]: E0127 14:14:27.937237 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.437219416 +0000 UTC m=+170.088543818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.940864 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"]
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.941917 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:27 crc kubenswrapper[4833]: I0127 14:14:27.953754 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.039037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.039149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.039196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpqr\" (UniqueName: \"kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.040391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.040751 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.54073756 +0000 UTC m=+170.192061962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.074114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-rrz8c"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.074201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-rrz8c"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.080674 4833 patch_prober.go:28] interesting pod/console-f9d7485db-rrz8c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.080738 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-rrz8c" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.136355 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-p25n6"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.140081 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:14:28 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Jan 27 14:14:28 crc kubenswrapper[4833]: [+]process-running ok
Jan 27 14:14:28 crc kubenswrapper[4833]: healthz check failed
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.140133 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.140915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.141103 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.641075418 +0000 UTC m=+170.292399820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.141285 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.141310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.141377 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpqr\" (UniqueName: \"kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.141665 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.142040 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.642020844 +0000 UTC m=+170.293345246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.142258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.142308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.163011 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.166552 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpqr\" (UniqueName: \"kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr\") pod \"redhat-marketplace-x9fn2\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: W0127 14:14:28.168903 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda04358f4_8a2a_4acf_8607_0afc9ffceb9f.slice/crio-30f5f7aefda62043d66c0e28652d470919a75ed49a5633f4314559033207993c WatchSource:0}: Error finding container 30f5f7aefda62043d66c0e28652d470919a75ed49a5633f4314559033207993c: Status 404 returned error can't find the container with id 30f5f7aefda62043d66c0e28652d470919a75ed49a5633f4314559033207993c
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.245146 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.245338 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.745309561 +0000 UTC m=+170.396633963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.245528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.245967 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.745950509 +0000 UTC m=+170.397274911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.281728 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9fn2"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.347124 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.347630 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.847610212 +0000 UTC m=+170.498934614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.410044 4833 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T14:14:27.699410739Z","Handler":null,"Name":""}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.448402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: E0127 14:14:28.448834 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:14:28.948813224 +0000 UTC m=+170.600137676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xzhbs" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.462874 4833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.462914 4833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.549937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.558757 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.559857 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.560792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.565721 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.578483 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.582148 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.651581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.652117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.652179 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pr2r\" (UniqueName: \"kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.652236 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.670269 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.670322 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.688500 4833 generic.go:334] "Generic (PLEG): container finished" podID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerID="61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321" exitCode=0
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.688570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerDied","Data":"61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.688603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerStarted","Data":"30f5f7aefda62043d66c0e28652d470919a75ed49a5633f4314559033207993c"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.689849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerStarted","Data":"0e99bba298489556239469e1b3c8eabe189bf654c59ff0d86c2f2d955619b00d"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.695301 4833 generic.go:334] "Generic (PLEG): container finished" podID="902663e2-9d1b-47a2-af8b-fcd67c717b70" containerID="97ccb2c62505785dd5d58879edef73f586f6368e2b5c7ea9850b522eac91ab0e" exitCode=0
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.695380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" event={"ID":"902663e2-9d1b-47a2-af8b-fcd67c717b70","Type":"ContainerDied","Data":"97ccb2c62505785dd5d58879edef73f586f6368e2b5c7ea9850b522eac91ab0e"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.698173 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" event={"ID":"50e0dc7f-981b-4ed6-999a-75ca3b351704","Type":"ContainerStarted","Data":"6295583a4d8252b9b0de825c8530a88c69d68e3da58bf75c2de5fbbc5e5b6519"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.698224 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" event={"ID":"50e0dc7f-981b-4ed6-999a-75ca3b351704","Type":"ContainerStarted","Data":"58f86f05219208f8af9109006da169236a96f3899863fb28a1bfbf2b5dd2147d"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.699983 4833 generic.go:334] "Generic (PLEG): container finished" podID="00dc743e-f960-474a-a605-5ae98fe806a8" containerID="3d5d655d392c1a14462df063e07fa0ce92ecc87c5ae11756f5dfcbee2d7cf118" exitCode=0
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.700088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00dc743e-f960-474a-a605-5ae98fe806a8","Type":"ContainerDied","Data":"3d5d655d392c1a14462df063e07fa0ce92ecc87c5ae11756f5dfcbee2d7cf118"}
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.739876 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xzhbs\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.753118 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pr2r\" (UniqueName: \"kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.753315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.753342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.754202 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.754737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.782990 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bs9ln" podStartSLOduration=13.782963049 podStartE2EDuration="13.782963049s" podCreationTimestamp="2026-01-27 14:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:28.776532034 +0000 UTC m=+170.427856506" watchObservedRunningTime="2026-01-27 14:14:28.782963049 +0000 UTC m=+170.434287451"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.801209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pr2r\" (UniqueName: \"kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r\") pod \"redhat-operators-9g5pw\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.826037 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.911164 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g5pw"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.940884 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.949861 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4799"]
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.953643 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:28 crc kubenswrapper[4833]: I0127 14:14:28.954815 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4799"]
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.057098 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.057581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjffp\" (UniqueName: \"kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.057649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.143746 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 14:14:29 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Jan 27 14:14:29 crc kubenswrapper[4833]: [+]process-running ok
Jan 27 14:14:29 crc kubenswrapper[4833]: healthz check failed
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.143802 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.158540 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjffp\" (UniqueName: \"kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.158583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.158655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.159248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.159544 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.161790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"]
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.181028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjffp\" (UniqueName: \"kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp\") pod \"redhat-operators-v4799\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " pod="openshift-marketplace/redhat-operators-v4799"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.227816 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.272285 4833 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.409196 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:14:29 crc kubenswrapper[4833]: W0127 14:14:29.446594 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4caae65f_8437_4b6d_ae10_e0ac8625e4b4.slice/crio-f197c327ad9e3a89d6d33b2854103fe5d3f95062d047f69a627582a988ed4a08 WatchSource:0}: Error finding container f197c327ad9e3a89d6d33b2854103fe5d3f95062d047f69a627582a988ed4a08: Status 404 returned error can't find the container with id f197c327ad9e3a89d6d33b2854103fe5d3f95062d047f69a627582a988ed4a08 Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.485602 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4799"] Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.706363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerStarted","Data":"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9"} Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.708771 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerStarted","Data":"926c67d74a9a051e833a092e11c1f7af6e6d872c18e72cda096389565e0c18b6"} Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.708804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" event={"ID":"4caae65f-8437-4b6d-ae10-e0ac8625e4b4","Type":"ContainerStarted","Data":"f197c327ad9e3a89d6d33b2854103fe5d3f95062d047f69a627582a988ed4a08"} Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 
14:14:29.710100 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f2855d7-f801-45ec-b2fb-245142f74599" containerID="a933bc7ff1c4b9bb7b086fa720c5d2fd28f04d9ae3deb2498361a31cec0a61a5" exitCode=0 Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.710153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerDied","Data":"a933bc7ff1c4b9bb7b086fa720c5d2fd28f04d9ae3deb2498361a31cec0a61a5"} Jan 27 14:14:29 crc kubenswrapper[4833]: I0127 14:14:29.713139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerStarted","Data":"3facb04bfca6291c54b4d393ff4ba5f399030ca209a44447fc8b9fd535156cfe"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.118502 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.125205 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.143940 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:30 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:30 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:30 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.143999 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4czl\" (UniqueName: \"kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl\") pod \"902663e2-9d1b-47a2-af8b-fcd67c717b70\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume\") pod \"902663e2-9d1b-47a2-af8b-fcd67c717b70\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170412 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume\") pod \"902663e2-9d1b-47a2-af8b-fcd67c717b70\" (UID: \"902663e2-9d1b-47a2-af8b-fcd67c717b70\") " Jan 27 
14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170522 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir\") pod \"00dc743e-f960-474a-a605-5ae98fe806a8\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170566 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access\") pod \"00dc743e-f960-474a-a605-5ae98fe806a8\" (UID: \"00dc743e-f960-474a-a605-5ae98fe806a8\") " Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00dc743e-f960-474a-a605-5ae98fe806a8" (UID: "00dc743e-f960-474a-a605-5ae98fe806a8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.170914 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00dc743e-f960-474a-a605-5ae98fe806a8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.171677 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume" (OuterVolumeSpecName: "config-volume") pod "902663e2-9d1b-47a2-af8b-fcd67c717b70" (UID: "902663e2-9d1b-47a2-af8b-fcd67c717b70"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.180847 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00dc743e-f960-474a-a605-5ae98fe806a8" (UID: "00dc743e-f960-474a-a605-5ae98fe806a8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.181357 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "902663e2-9d1b-47a2-af8b-fcd67c717b70" (UID: "902663e2-9d1b-47a2-af8b-fcd67c717b70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.181776 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl" (OuterVolumeSpecName: "kube-api-access-p4czl") pod "902663e2-9d1b-47a2-af8b-fcd67c717b70" (UID: "902663e2-9d1b-47a2-af8b-fcd67c717b70"). InnerVolumeSpecName "kube-api-access-p4czl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.272369 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902663e2-9d1b-47a2-af8b-fcd67c717b70-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.272408 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00dc743e-f960-474a-a605-5ae98fe806a8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.272421 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4czl\" (UniqueName: \"kubernetes.io/projected/902663e2-9d1b-47a2-af8b-fcd67c717b70-kube-api-access-p4czl\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.272434 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902663e2-9d1b-47a2-af8b-fcd67c717b70-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.644144 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xcpgj" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.705124 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:14:30 crc kubenswrapper[4833]: E0127 14:14:30.706260 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902663e2-9d1b-47a2-af8b-fcd67c717b70" containerName="collect-profiles" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.706370 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="902663e2-9d1b-47a2-af8b-fcd67c717b70" containerName="collect-profiles" Jan 27 14:14:30 crc kubenswrapper[4833]: E0127 14:14:30.706520 4833 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="00dc743e-f960-474a-a605-5ae98fe806a8" containerName="pruner" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.706603 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dc743e-f960-474a-a605-5ae98fe806a8" containerName="pruner" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.706801 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="902663e2-9d1b-47a2-af8b-fcd67c717b70" containerName="collect-profiles" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.706893 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="00dc743e-f960-474a-a605-5ae98fe806a8" containerName="pruner" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.707469 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.712054 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.717697 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.726366 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.772069 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.772735 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"00dc743e-f960-474a-a605-5ae98fe806a8","Type":"ContainerDied","Data":"a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.772784 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58cca26cfe42748f19ee1d0223e5e878f22b909b9c397131dce21c247c9ca6b" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.776976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerStarted","Data":"a989d4193b3358c4993ab9c8e0f7253593dc46c2dc18bf1e60537a1c7b87d728"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.780219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" event={"ID":"902663e2-9d1b-47a2-af8b-fcd67c717b70","Type":"ContainerDied","Data":"006a7de63d015f3496bacee73e237e15e4f73662ce598eee47ddc1ea9ed1afb0"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.780300 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006a7de63d015f3496bacee73e237e15e4f73662ce598eee47ddc1ea9ed1afb0" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.780438 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.781793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.781835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.783787 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerID="d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9" exitCode=0 Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.784175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerDied","Data":"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.786863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" event={"ID":"4caae65f-8437-4b6d-ae10-e0ac8625e4b4","Type":"ContainerStarted","Data":"54bd27266da270b60190249fb963df2ecc7e9e5f7eb923180cc16652a496b01b"} Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.883210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.884109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.884173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:30 crc kubenswrapper[4833]: I0127 14:14:30.903123 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.043756 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.141566 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:31 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:31 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:31 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.141633 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.479845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.799505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"34b8f769-5c2b-44f1-a324-70b2582d9c94","Type":"ContainerStarted","Data":"c12d2ca3cb79abf4b9023cfefecafc71cba1f99f8415d4c7e5df7aaa318f0db0"} Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.802186 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c56799f-d464-46ff-a6f0-23426b7385df" containerID="a989d4193b3358c4993ab9c8e0f7253593dc46c2dc18bf1e60537a1c7b87d728" exitCode=0 Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.802248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerDied","Data":"a989d4193b3358c4993ab9c8e0f7253593dc46c2dc18bf1e60537a1c7b87d728"} Jan 27 14:14:31 crc 
kubenswrapper[4833]: I0127 14:14:31.802407 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:31 crc kubenswrapper[4833]: I0127 14:14:31.852173 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" podStartSLOduration=150.852145739 podStartE2EDuration="2m30.852145739s" podCreationTimestamp="2026-01-27 14:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:14:31.849065515 +0000 UTC m=+173.500389917" watchObservedRunningTime="2026-01-27 14:14:31.852145739 +0000 UTC m=+173.503470151" Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.147689 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:32 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:32 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:32 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.147765 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.260511 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 
14:14:32.260579 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.532016 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.536598 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-k2dq7" Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.817946 4833 generic.go:334] "Generic (PLEG): container finished" podID="34b8f769-5c2b-44f1-a324-70b2582d9c94" containerID="89246e168c5a1b89e2c110d7d5df45f516c4b8874b47af48f17b479ce37d1c6b" exitCode=0 Jan 27 14:14:32 crc kubenswrapper[4833]: I0127 14:14:32.818042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"34b8f769-5c2b-44f1-a324-70b2582d9c94","Type":"ContainerDied","Data":"89246e168c5a1b89e2c110d7d5df45f516c4b8874b47af48f17b479ce37d1c6b"} Jan 27 14:14:33 crc kubenswrapper[4833]: I0127 14:14:33.168064 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:33 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:33 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:33 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:33 crc kubenswrapper[4833]: I0127 14:14:33.168699 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" 
podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:34 crc kubenswrapper[4833]: I0127 14:14:34.139430 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:34 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:34 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:34 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:34 crc kubenswrapper[4833]: I0127 14:14:34.139511 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:35 crc kubenswrapper[4833]: I0127 14:14:35.138792 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:35 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:35 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:35 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:35 crc kubenswrapper[4833]: I0127 14:14:35.139327 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:36 crc kubenswrapper[4833]: I0127 14:14:36.139102 4833 patch_prober.go:28] interesting pod/router-default-5444994796-p25n6 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:14:36 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Jan 27 14:14:36 crc kubenswrapper[4833]: [+]process-running ok Jan 27 14:14:36 crc kubenswrapper[4833]: healthz check failed Jan 27 14:14:36 crc kubenswrapper[4833]: I0127 14:14:36.139166 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p25n6" podUID="9a6599b1-062c-49b9-96fd-c6ddf5464938" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.141281 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.145659 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-p25n6" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.439326 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.439856 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.439401 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-ghfvn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/\": dial tcp 
10.217.0.28:8080: connect: connection refused" start-of-body= Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.440072 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ghfvn" podUID="3005a862-7d67-4c98-9706-fca9dfc75ba0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.28:8080/\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.726951 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.855337 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"34b8f769-5c2b-44f1-a324-70b2582d9c94","Type":"ContainerDied","Data":"c12d2ca3cb79abf4b9023cfefecafc71cba1f99f8415d4c7e5df7aaa318f0db0"} Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.855390 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.855390 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12d2ca3cb79abf4b9023cfefecafc71cba1f99f8415d4c7e5df7aaa318f0db0" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.905091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access\") pod \"34b8f769-5c2b-44f1-a324-70b2582d9c94\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.905282 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir\") pod \"34b8f769-5c2b-44f1-a324-70b2582d9c94\" (UID: \"34b8f769-5c2b-44f1-a324-70b2582d9c94\") " Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.905386 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "34b8f769-5c2b-44f1-a324-70b2582d9c94" (UID: "34b8f769-5c2b-44f1-a324-70b2582d9c94"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.905747 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34b8f769-5c2b-44f1-a324-70b2582d9c94-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:37 crc kubenswrapper[4833]: I0127 14:14:37.917687 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "34b8f769-5c2b-44f1-a324-70b2582d9c94" (UID: "34b8f769-5c2b-44f1-a324-70b2582d9c94"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:14:38 crc kubenswrapper[4833]: I0127 14:14:38.007354 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34b8f769-5c2b-44f1-a324-70b2582d9c94-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:38 crc kubenswrapper[4833]: I0127 14:14:38.097569 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:38 crc kubenswrapper[4833]: I0127 14:14:38.101116 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:14:44 crc kubenswrapper[4833]: I0127 14:14:44.494410 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:44 crc kubenswrapper[4833]: I0127 14:14:44.495399 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" containerID="cri-o://8b46dbbc96dd54a56613220cc713ded7ed463b680f16630d43fe9ee83ea93124" gracePeriod=30 Jan 27 14:14:44 
crc kubenswrapper[4833]: I0127 14:14:44.516415 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:44 crc kubenswrapper[4833]: I0127 14:14:44.516656 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" containerID="cri-o://491ba252d1e2837611afbd4387c77dffb21164fce5ca6a8473c6b908fb1ea55a" gracePeriod=30 Jan 27 14:14:44 crc kubenswrapper[4833]: I0127 14:14:44.892993 4833 generic.go:334] "Generic (PLEG): container finished" podID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerID="8b46dbbc96dd54a56613220cc713ded7ed463b680f16630d43fe9ee83ea93124" exitCode=0 Jan 27 14:14:44 crc kubenswrapper[4833]: I0127 14:14:44.893046 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" event={"ID":"3420db2b-d99f-4c35-9423-1c2db40ac8da","Type":"ContainerDied","Data":"8b46dbbc96dd54a56613220cc713ded7ed463b680f16630d43fe9ee83ea93124"} Jan 27 14:14:45 crc kubenswrapper[4833]: I0127 14:14:45.901148 4833 generic.go:334] "Generic (PLEG): container finished" podID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerID="491ba252d1e2837611afbd4387c77dffb21164fce5ca6a8473c6b908fb1ea55a" exitCode=0 Jan 27 14:14:45 crc kubenswrapper[4833]: I0127 14:14:45.901211 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" event={"ID":"6e9f3d52-0137-4dc9-9306-970401a0f7af","Type":"ContainerDied","Data":"491ba252d1e2837611afbd4387c77dffb21164fce5ca6a8473c6b908fb1ea55a"} Jan 27 14:14:47 crc kubenswrapper[4833]: I0127 14:14:47.260033 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qs4fq container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:14:47 crc kubenswrapper[4833]: I0127 14:14:47.260097 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:14:47 crc kubenswrapper[4833]: I0127 14:14:47.367088 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zrj2p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 14:14:47 crc kubenswrapper[4833]: I0127 14:14:47.367170 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 14:14:47 crc kubenswrapper[4833]: I0127 14:14:47.451981 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ghfvn" Jan 27 14:14:48 crc kubenswrapper[4833]: I0127 14:14:48.948797 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:14:49 crc kubenswrapper[4833]: E0127 14:14:49.628218 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 14:14:49 crc kubenswrapper[4833]: E0127 14:14:49.628421 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9dk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k6vp8_openshift-marketplace(73933cea-fb47-4e10-b0d8-bb9d2f3f882f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 
14:14:49 crc kubenswrapper[4833]: E0127 14:14:49.629605 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k6vp8" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" Jan 27 14:14:50 crc kubenswrapper[4833]: E0127 14:14:50.706501 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 14:14:50 crc kubenswrapper[4833]: E0127 14:14:50.707166 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnzpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c5pcs_openshift-marketplace(226dae94-d6a8-45f8-99e4-ec29189f0bd5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:14:50 crc kubenswrapper[4833]: E0127 14:14:50.709731 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c5pcs" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" Jan 27 14:14:50 crc 
kubenswrapper[4833]: E0127 14:14:50.798154 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 14:14:50 crc kubenswrapper[4833]: E0127 14:14:50.798517 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fts44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-p655b_openshift-marketplace(7d47e601-7e9f-4c23-9fcf-db5356101a66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:14:50 crc kubenswrapper[4833]: E0127 14:14:50.800132 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-p655b" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" Jan 27 14:14:52 crc kubenswrapper[4833]: E0127 14:14:52.491889 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pcs" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" Jan 27 14:14:52 crc kubenswrapper[4833]: E0127 14:14:52.492521 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-p655b" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" Jan 27 14:14:53 crc kubenswrapper[4833]: E0127 14:14:53.395492 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 14:14:53 crc kubenswrapper[4833]: E0127 14:14:53.395682 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljsbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m9thj_openshift-marketplace(a04358f4-8a2a-4acf-8607-0afc9ffceb9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:14:53 crc kubenswrapper[4833]: E0127 14:14:53.396971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m9thj" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" Jan 27 14:14:57 crc kubenswrapper[4833]: E0127 14:14:57.925582 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m9thj" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" Jan 27 14:14:57 crc kubenswrapper[4833]: I0127 14:14:57.984469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" event={"ID":"3420db2b-d99f-4c35-9423-1c2db40ac8da","Type":"ContainerDied","Data":"157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603"} Jan 27 14:14:57 crc kubenswrapper[4833]: I0127 14:14:57.984538 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157b489422808d2b6c6375d585200d4bc3b4c0f73c85b2d05ad58cf2db0b3603" Jan 27 14:14:57 crc kubenswrapper[4833]: I0127 14:14:57.986967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" event={"ID":"6e9f3d52-0137-4dc9-9306-970401a0f7af","Type":"ContainerDied","Data":"96a074cdde2897134952c716a1b41e3f6c962c9baeb89368c277ad8b0aa4642e"} Jan 27 14:14:57 crc kubenswrapper[4833]: I0127 14:14:57.987014 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96a074cdde2897134952c716a1b41e3f6c962c9baeb89368c277ad8b0aa4642e" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.000912 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.007734 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.033734 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:14:58 crc kubenswrapper[4833]: E0127 14:14:58.033995 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034007 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: E0127 14:14:58.034021 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b8f769-5c2b-44f1-a324-70b2582d9c94" containerName="pruner" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034028 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b8f769-5c2b-44f1-a324-70b2582d9c94" containerName="pruner" Jan 27 14:14:58 crc kubenswrapper[4833]: E0127 14:14:58.034039 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034048 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034180 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034190 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034201 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b8f769-5c2b-44f1-a324-70b2582d9c94" containerName="pruner" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.034665 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.054428 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.096245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config\") pod \"3420db2b-d99f-4c35-9423-1c2db40ac8da\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.096377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca\") pod \"3420db2b-d99f-4c35-9423-1c2db40ac8da\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.096457 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbt24\" (UniqueName: \"kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24\") pod \"3420db2b-d99f-4c35-9423-1c2db40ac8da\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.096532 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert\") pod \"3420db2b-d99f-4c35-9423-1c2db40ac8da\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.096630 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles\") pod \"3420db2b-d99f-4c35-9423-1c2db40ac8da\" (UID: \"3420db2b-d99f-4c35-9423-1c2db40ac8da\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.097413 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3420db2b-d99f-4c35-9423-1c2db40ac8da" (UID: "3420db2b-d99f-4c35-9423-1c2db40ac8da"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.097497 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca" (OuterVolumeSpecName: "client-ca") pod "3420db2b-d99f-4c35-9423-1c2db40ac8da" (UID: "3420db2b-d99f-4c35-9423-1c2db40ac8da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.097627 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config" (OuterVolumeSpecName: "config") pod "3420db2b-d99f-4c35-9423-1c2db40ac8da" (UID: "3420db2b-d99f-4c35-9423-1c2db40ac8da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.103509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3420db2b-d99f-4c35-9423-1c2db40ac8da" (UID: "3420db2b-d99f-4c35-9423-1c2db40ac8da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.103748 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24" (OuterVolumeSpecName: "kube-api-access-xbt24") pod "3420db2b-d99f-4c35-9423-1c2db40ac8da" (UID: "3420db2b-d99f-4c35-9423-1c2db40ac8da"). InnerVolumeSpecName "kube-api-access-xbt24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.178341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q7b7m" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212065 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca\") pod \"6e9f3d52-0137-4dc9-9306-970401a0f7af\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212168 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvzw4\" (UniqueName: \"kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4\") pod \"6e9f3d52-0137-4dc9-9306-970401a0f7af\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212235 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert\") pod \"6e9f3d52-0137-4dc9-9306-970401a0f7af\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212257 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config\") pod \"6e9f3d52-0137-4dc9-9306-970401a0f7af\" (UID: \"6e9f3d52-0137-4dc9-9306-970401a0f7af\") " Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212596 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dbs9\" (UniqueName: \"kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212682 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212697 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212708 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3420db2b-d99f-4c35-9423-1c2db40ac8da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212719 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbt24\" (UniqueName: \"kubernetes.io/projected/3420db2b-d99f-4c35-9423-1c2db40ac8da-kube-api-access-xbt24\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.212732 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3420db2b-d99f-4c35-9423-1c2db40ac8da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.213741 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e9f3d52-0137-4dc9-9306-970401a0f7af" (UID: "6e9f3d52-0137-4dc9-9306-970401a0f7af"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.214187 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config" (OuterVolumeSpecName: "config") pod "6e9f3d52-0137-4dc9-9306-970401a0f7af" (UID: "6e9f3d52-0137-4dc9-9306-970401a0f7af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.216955 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e9f3d52-0137-4dc9-9306-970401a0f7af" (UID: "6e9f3d52-0137-4dc9-9306-970401a0f7af"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.217131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4" (OuterVolumeSpecName: "kube-api-access-hvzw4") pod "6e9f3d52-0137-4dc9-9306-970401a0f7af" (UID: "6e9f3d52-0137-4dc9-9306-970401a0f7af"). InnerVolumeSpecName "kube-api-access-hvzw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.259898 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qs4fq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.260023 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313419 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert\") pod 
\"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313544 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dbs9\" (UniqueName: \"kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313612 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313625 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvzw4\" (UniqueName: \"kubernetes.io/projected/6e9f3d52-0137-4dc9-9306-970401a0f7af-kube-api-access-hvzw4\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313638 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f3d52-0137-4dc9-9306-970401a0f7af-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.313650 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6e9f3d52-0137-4dc9-9306-970401a0f7af-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.314592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.315172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.316098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.328119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert\") pod \"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.331754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dbs9\" (UniqueName: \"kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9\") pod 
\"controller-manager-77c7fb56bb-4s29x\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.365307 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.366191 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zrj2p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.366253 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.992055 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p" Jan 27 14:14:58 crc kubenswrapper[4833]: I0127 14:14:58.992079 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qs4fq" Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.025681 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.033645 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zrj2p"] Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.036604 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.039032 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qs4fq"] Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.217192 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3420db2b-d99f-4c35-9423-1c2db40ac8da" path="/var/lib/kubelet/pods/3420db2b-d99f-4c35-9423-1c2db40ac8da/volumes" Jan 27 14:14:59 crc kubenswrapper[4833]: I0127 14:14:59.217740 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9f3d52-0137-4dc9-9306-970401a0f7af" path="/var/lib/kubelet/pods/6e9f3d52-0137-4dc9-9306-970401a0f7af/volumes" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.138714 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd"] Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.139532 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.141926 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.142117 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.149090 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd"] Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.149707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.149827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.149858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcxm\" (UniqueName: \"kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.251162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.251283 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.251314 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwcxm\" (UniqueName: \"kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.252391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.268674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.272021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwcxm\" (UniqueName: \"kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm\") pod \"collect-profiles-29492055-jhhfd\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.463076 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.507496 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.508304 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.514658 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.514918 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.515053 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.515334 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.515677 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.515711 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.522748 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.555912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.556071 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb78k\" (UniqueName: \"kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.556163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.556189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.657410 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb78k\" (UniqueName: \"kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.657483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca\") pod 
\"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.657509 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.657561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.658835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.661250 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.661559 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.673054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb78k\" (UniqueName: \"kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k\") pod \"route-controller-manager-5f86b664bc-5c6mj\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:00 crc kubenswrapper[4833]: I0127 14:15:00.832860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:01 crc kubenswrapper[4833]: E0127 14:15:01.584209 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 14:15:01 crc kubenswrapper[4833]: E0127 14:15:01.584882 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pr2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9g5pw_openshift-marketplace(e3c453af-dcdd-449a-b09b-dc6076b3b07a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:15:01 crc kubenswrapper[4833]: E0127 14:15:01.586635 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9g5pw" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" Jan 27 14:15:02 crc 
kubenswrapper[4833]: I0127 14:15:02.008757 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerStarted","Data":"2aecb4f63c997cdd1e4e6bee335232bf1bf81d629e3ca1cec466b4ea9df3e677"} Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.010717 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerStarted","Data":"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635"} Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.018559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerStarted","Data":"3ef033a939dd09561ac0cb963e9f4d761687b38bbffd59c21b17426405db36f0"} Jan 27 14:15:02 crc kubenswrapper[4833]: E0127 14:15:02.020038 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9g5pw" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.132311 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:15:02 crc kubenswrapper[4833]: W0127 14:15:02.136889 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b9ee1d_4ff3_4f04_aff4_0d4c1386c785.slice/crio-a2b3b3a8485ba13552bd38046e77d4bbafa6e565308f69953178ba6b5ede17c0 WatchSource:0}: Error finding container a2b3b3a8485ba13552bd38046e77d4bbafa6e565308f69953178ba6b5ede17c0: Status 404 returned error can't find the container with id 
a2b3b3a8485ba13552bd38046e77d4bbafa6e565308f69953178ba6b5ede17c0 Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.228230 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd"] Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.235755 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:02 crc kubenswrapper[4833]: W0127 14:15:02.243682 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c1fb855_4475_4e85_891a_6fb0e60b1666.slice/crio-73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913 WatchSource:0}: Error finding container 73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913: Status 404 returned error can't find the container with id 73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913 Jan 27 14:15:02 crc kubenswrapper[4833]: W0127 14:15:02.243980 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702d5ec8_6425_4a29_a344_40908ec1d15e.slice/crio-33aa80d35fb3d4664731388317f9426afc1ff4f1c93718711e0c20dcf320b4ae WatchSource:0}: Error finding container 33aa80d35fb3d4664731388317f9426afc1ff4f1c93718711e0c20dcf320b4ae: Status 404 returned error can't find the container with id 33aa80d35fb3d4664731388317f9426afc1ff4f1c93718711e0c20dcf320b4ae Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.261197 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:15:02 crc kubenswrapper[4833]: I0127 14:15:02.261289 4833 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.024551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" event={"ID":"702d5ec8-6425-4a29-a344-40908ec1d15e","Type":"ContainerStarted","Data":"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.025027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" event={"ID":"702d5ec8-6425-4a29-a344-40908ec1d15e","Type":"ContainerStarted","Data":"33aa80d35fb3d4664731388317f9426afc1ff4f1c93718711e0c20dcf320b4ae"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.026277 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.029424 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f2855d7-f801-45ec-b2fb-245142f74599" containerID="3ef033a939dd09561ac0cb963e9f4d761687b38bbffd59c21b17426405db36f0" exitCode=0 Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.029494 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerDied","Data":"3ef033a939dd09561ac0cb963e9f4d761687b38bbffd59c21b17426405db36f0"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.031865 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c56799f-d464-46ff-a6f0-23426b7385df" 
containerID="2aecb4f63c997cdd1e4e6bee335232bf1bf81d629e3ca1cec466b4ea9df3e677" exitCode=0 Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.031959 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerDied","Data":"2aecb4f63c997cdd1e4e6bee335232bf1bf81d629e3ca1cec466b4ea9df3e677"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.032074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.034034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" event={"ID":"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785","Type":"ContainerStarted","Data":"693b1ed7306c814b1508c924a2d31127584c0ff530ef8da332d46a83dfc8b098"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.034063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" event={"ID":"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785","Type":"ContainerStarted","Data":"a2b3b3a8485ba13552bd38046e77d4bbafa6e565308f69953178ba6b5ede17c0"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.034699 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.037700 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c1fb855-4475-4e85-891a-6fb0e60b1666" containerID="2371cf77a3863b0cf1a41af23ac07dc155201e9d269db7008fb1bdb5f9731388" exitCode=0 Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.037779 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" 
event={"ID":"3c1fb855-4475-4e85-891a-6fb0e60b1666","Type":"ContainerDied","Data":"2371cf77a3863b0cf1a41af23ac07dc155201e9d269db7008fb1bdb5f9731388"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.037804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" event={"ID":"3c1fb855-4475-4e85-891a-6fb0e60b1666","Type":"ContainerStarted","Data":"73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.040104 4833 generic.go:334] "Generic (PLEG): container finished" podID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerID="0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635" exitCode=0 Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.040133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerDied","Data":"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635"} Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.041110 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.052798 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" podStartSLOduration=19.052761173 podStartE2EDuration="19.052761173s" podCreationTimestamp="2026-01-27 14:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:03.049204095 +0000 UTC m=+204.700528507" watchObservedRunningTime="2026-01-27 14:15:03.052761173 +0000 UTC m=+204.704085575" Jan 27 14:15:03 crc kubenswrapper[4833]: I0127 14:15:03.108950 4833 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" podStartSLOduration=19.108927846 podStartE2EDuration="19.108927846s" podCreationTimestamp="2026-01-27 14:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:03.107635587 +0000 UTC m=+204.758960019" watchObservedRunningTime="2026-01-27 14:15:03.108927846 +0000 UTC m=+204.760252258" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.342429 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.364515 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.416871 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume\") pod \"3c1fb855-4475-4e85-891a-6fb0e60b1666\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.417420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume\") pod \"3c1fb855-4475-4e85-891a-6fb0e60b1666\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.417467 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwcxm\" (UniqueName: \"kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm\") pod \"3c1fb855-4475-4e85-891a-6fb0e60b1666\" (UID: \"3c1fb855-4475-4e85-891a-6fb0e60b1666\") " Jan 27 14:15:04 crc 
kubenswrapper[4833]: I0127 14:15:04.418754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c1fb855-4475-4e85-891a-6fb0e60b1666" (UID: "3c1fb855-4475-4e85-891a-6fb0e60b1666"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.423930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c1fb855-4475-4e85-891a-6fb0e60b1666" (UID: "3c1fb855-4475-4e85-891a-6fb0e60b1666"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.428010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm" (OuterVolumeSpecName: "kube-api-access-nwcxm") pod "3c1fb855-4475-4e85-891a-6fb0e60b1666" (UID: "3c1fb855-4475-4e85-891a-6fb0e60b1666"). InnerVolumeSpecName "kube-api-access-nwcxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.456806 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.518230 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1fb855-4475-4e85-891a-6fb0e60b1666-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.518265 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwcxm\" (UniqueName: \"kubernetes.io/projected/3c1fb855-4475-4e85-891a-6fb0e60b1666-kube-api-access-nwcxm\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:04 crc kubenswrapper[4833]: I0127 14:15:04.518277 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c1fb855-4475-4e85-891a-6fb0e60b1666-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.055614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerStarted","Data":"a015f486febcd7c0eb1821d04217591f17e5baac65c1bb1573d10494204d7556"} Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.057837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerStarted","Data":"bf9784597716f8c1319adf62af65ea6eb203a104277d0908888257495c211d57"} Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.060322 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.060887 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd" event={"ID":"3c1fb855-4475-4e85-891a-6fb0e60b1666","Type":"ContainerDied","Data":"73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913"} Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.060927 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73526d897bf05bd3a9747cdb8efd305c03d5f353eebf8429fabe842e613a9913" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.063066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerStarted","Data":"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06"} Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.065324 4833 generic.go:334] "Generic (PLEG): container finished" podID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerID="fd1d3c72522b509991394a1b83b387dfcddf7a713386e8beb7b9ae1a2d3ac917" exitCode=0 Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.065717 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerDied","Data":"fd1d3c72522b509991394a1b83b387dfcddf7a713386e8beb7b9ae1a2d3ac917"} Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.093684 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x9fn2" podStartSLOduration=3.892005043 podStartE2EDuration="38.093663667s" podCreationTimestamp="2026-01-27 14:14:27 +0000 UTC" firstStartedPulling="2026-01-27 14:14:29.711737787 +0000 UTC m=+171.363062189" lastFinishedPulling="2026-01-27 
14:15:03.913396411 +0000 UTC m=+205.564720813" observedRunningTime="2026-01-27 14:15:05.090859021 +0000 UTC m=+206.742183423" watchObservedRunningTime="2026-01-27 14:15:05.093663667 +0000 UTC m=+206.744988069" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.108519 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:15:05 crc kubenswrapper[4833]: E0127 14:15:05.109059 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1fb855-4475-4e85-891a-6fb0e60b1666" containerName="collect-profiles" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.109194 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1fb855-4475-4e85-891a-6fb0e60b1666" containerName="collect-profiles" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.109375 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1fb855-4475-4e85-891a-6fb0e60b1666" containerName="collect-profiles" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.109890 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.111756 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.112324 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.126780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.128235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.131342 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2md2f" podStartSLOduration=3.771829456 podStartE2EDuration="40.131316176s" podCreationTimestamp="2026-01-27 14:14:25 +0000 UTC" firstStartedPulling="2026-01-27 14:14:27.668673091 +0000 UTC m=+169.319997493" lastFinishedPulling="2026-01-27 14:15:04.028159811 +0000 UTC m=+205.679484213" observedRunningTime="2026-01-27 14:15:05.122143535 +0000 UTC m=+206.773467947" watchObservedRunningTime="2026-01-27 14:15:05.131316176 +0000 UTC m=+206.782640578" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.138901 4833 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.187619 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4799" podStartSLOduration=10.478250404 podStartE2EDuration="37.187601042s" podCreationTimestamp="2026-01-27 14:14:28 +0000 UTC" firstStartedPulling="2026-01-27 14:14:37.676646806 +0000 UTC m=+179.327971218" lastFinishedPulling="2026-01-27 14:15:04.385997454 +0000 UTC m=+206.037321856" observedRunningTime="2026-01-27 14:15:05.185754196 +0000 UTC m=+206.837078598" watchObservedRunningTime="2026-01-27 14:15:05.187601042 +0000 UTC m=+206.838925444" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.230369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.230565 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.230655 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.263325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.440659 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.861106 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.861428 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:15:05 crc kubenswrapper[4833]: I0127 14:15:05.901168 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:15:05 crc kubenswrapper[4833]: W0127 14:15:05.938413 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8ea405ab_2d04_4b76_859f_8158f708fb4b.slice/crio-ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5 WatchSource:0}: Error finding container ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5: Status 404 returned error can't find the container with id ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5 Jan 27 14:15:06 crc kubenswrapper[4833]: I0127 14:15:06.078802 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" podUID="702d5ec8-6425-4a29-a344-40908ec1d15e" containerName="route-controller-manager" containerID="cri-o://dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0" gracePeriod=30 Jan 27 14:15:06 crc kubenswrapper[4833]: I0127 14:15:06.078890 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ea405ab-2d04-4b76-859f-8158f708fb4b","Type":"ContainerStarted","Data":"ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5"} Jan 27 14:15:06 crc kubenswrapper[4833]: I0127 14:15:06.078981 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" podUID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" containerName="controller-manager" containerID="cri-o://693b1ed7306c814b1508c924a2d31127584c0ff530ef8da332d46a83dfc8b098" gracePeriod=30 Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.008309 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.041986 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:07 crc kubenswrapper[4833]: E0127 14:15:07.042427 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702d5ec8-6425-4a29-a344-40908ec1d15e" containerName="route-controller-manager" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.042460 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="702d5ec8-6425-4a29-a344-40908ec1d15e" containerName="route-controller-manager" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.042559 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="702d5ec8-6425-4a29-a344-40908ec1d15e" containerName="route-controller-manager" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.042929 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.057904 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061597 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config\") pod \"702d5ec8-6425-4a29-a344-40908ec1d15e\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061666 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert\") pod \"702d5ec8-6425-4a29-a344-40908ec1d15e\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061691 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb78k\" (UniqueName: \"kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k\") pod \"702d5ec8-6425-4a29-a344-40908ec1d15e\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061711 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca\") pod \"702d5ec8-6425-4a29-a344-40908ec1d15e\" (UID: \"702d5ec8-6425-4a29-a344-40908ec1d15e\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061922 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca\") pod 
\"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.061950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.062147 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlckg\" (UniqueName: \"kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.062213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.062932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca" (OuterVolumeSpecName: "client-ca") pod "702d5ec8-6425-4a29-a344-40908ec1d15e" (UID: "702d5ec8-6425-4a29-a344-40908ec1d15e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.063153 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config" (OuterVolumeSpecName: "config") pod "702d5ec8-6425-4a29-a344-40908ec1d15e" (UID: "702d5ec8-6425-4a29-a344-40908ec1d15e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.067722 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "702d5ec8-6425-4a29-a344-40908ec1d15e" (UID: "702d5ec8-6425-4a29-a344-40908ec1d15e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.068681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k" (OuterVolumeSpecName: "kube-api-access-zb78k") pod "702d5ec8-6425-4a29-a344-40908ec1d15e" (UID: "702d5ec8-6425-4a29-a344-40908ec1d15e"). InnerVolumeSpecName "kube-api-access-zb78k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.096013 4833 generic.go:334] "Generic (PLEG): container finished" podID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" containerID="693b1ed7306c814b1508c924a2d31127584c0ff530ef8da332d46a83dfc8b098" exitCode=0 Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.096100 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" event={"ID":"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785","Type":"ContainerDied","Data":"693b1ed7306c814b1508c924a2d31127584c0ff530ef8da332d46a83dfc8b098"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.099066 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerID="0d31d2d9cc63c980d226e202bb0121f9580e01aeb57d94c76c4e69be707ae4f5" exitCode=0 Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.099129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerDied","Data":"0d31d2d9cc63c980d226e202bb0121f9580e01aeb57d94c76c4e69be707ae4f5"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.102308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ea405ab-2d04-4b76-859f-8158f708fb4b","Type":"ContainerStarted","Data":"724501880e68d885bb47789c562c0ff317284996d19a089bb31aafed73890080"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.108134 4833 generic.go:334] "Generic (PLEG): container finished" podID="702d5ec8-6425-4a29-a344-40908ec1d15e" containerID="dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0" exitCode=0 Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.108221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" event={"ID":"702d5ec8-6425-4a29-a344-40908ec1d15e","Type":"ContainerDied","Data":"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.108256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" event={"ID":"702d5ec8-6425-4a29-a344-40908ec1d15e","Type":"ContainerDied","Data":"33aa80d35fb3d4664731388317f9426afc1ff4f1c93718711e0c20dcf320b4ae"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.108281 4833 scope.go:117] "RemoveContainer" containerID="dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.108425 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.129847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerStarted","Data":"b2dfdf006c46e1b6ac3d02259b34990f6c6b420c2049c67269126815f5beef84"} Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.149573 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.149549937 podStartE2EDuration="2.149549937s" podCreationTimestamp="2026-01-27 14:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:07.146773092 +0000 UTC m=+208.798097504" watchObservedRunningTime="2026-01-27 14:15:07.149549937 +0000 UTC m=+208.800874339" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.163708 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zlckg\" (UniqueName: \"kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.163784 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.163860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.163891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.163985 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.164001 4833 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/702d5ec8-6425-4a29-a344-40908ec1d15e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.164019 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb78k\" (UniqueName: \"kubernetes.io/projected/702d5ec8-6425-4a29-a344-40908ec1d15e-kube-api-access-zb78k\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.164033 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/702d5ec8-6425-4a29-a344-40908ec1d15e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.168245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.168546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.173035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc 
kubenswrapper[4833]: I0127 14:15:07.183139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlckg\" (UniqueName: \"kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg\") pod \"route-controller-manager-59494bfd84-s4xbw\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.209869 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.212758 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.225427 4833 scope.go:117] "RemoveContainer" containerID="dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0" Jan 27 14:15:07 crc kubenswrapper[4833]: E0127 14:15:07.232026 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0\": container with ID starting with dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0 not found: ID does not exist" containerID="dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.232076 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0"} err="failed to get container status \"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0\": rpc error: code = NotFound desc = could not find container \"dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0\": container with ID 
starting with dc780552575ca1d56b1590d44446ef63a04b0fb7b5f9423162e455be331a60c0 not found: ID does not exist" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.233972 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6vp8" podStartSLOduration=4.231681996 podStartE2EDuration="42.233952662s" podCreationTimestamp="2026-01-27 14:14:25 +0000 UTC" firstStartedPulling="2026-01-27 14:14:28.701743293 +0000 UTC m=+170.353067695" lastFinishedPulling="2026-01-27 14:15:06.704013959 +0000 UTC m=+208.355338361" observedRunningTime="2026-01-27 14:15:07.167775043 +0000 UTC m=+208.819099445" watchObservedRunningTime="2026-01-27 14:15:07.233952662 +0000 UTC m=+208.885277064" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.265721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles\") pod \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.265781 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert\") pod \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.265930 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca\") pod \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.266035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config\") pod \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.266100 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dbs9\" (UniqueName: \"kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9\") pod \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\" (UID: \"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785\") " Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.267838 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" (UID: "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.268375 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config" (OuterVolumeSpecName: "config") pod "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" (UID: "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.269654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" (UID: "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.272471 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" (UID: "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.275062 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9" (OuterVolumeSpecName: "kube-api-access-7dbs9") pod "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" (UID: "b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785"). InnerVolumeSpecName "kube-api-access-7dbs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.287569 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.292624 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f86b664bc-5c6mj"] Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.327394 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2md2f" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="registry-server" probeResult="failure" output=< Jan 27 14:15:07 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:15:07 crc kubenswrapper[4833]: > Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.368391 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-client-ca\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.368799 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.368813 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dbs9\" (UniqueName: \"kubernetes.io/projected/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-kube-api-access-7dbs9\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.368837 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.368851 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:07 crc kubenswrapper[4833]: I0127 14:15:07.504681 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.146576 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" event={"ID":"b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785","Type":"ContainerDied","Data":"a2b3b3a8485ba13552bd38046e77d4bbafa6e565308f69953178ba6b5ede17c0"} Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.146596 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c7fb56bb-4s29x" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.147143 4833 scope.go:117] "RemoveContainer" containerID="693b1ed7306c814b1508c924a2d31127584c0ff530ef8da332d46a83dfc8b098" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.161592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" event={"ID":"5cacbdb1-e803-4d8b-830f-130a6cd3a225","Type":"ContainerStarted","Data":"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e"} Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.161686 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" event={"ID":"5cacbdb1-e803-4d8b-830f-130a6cd3a225","Type":"ContainerStarted","Data":"d6f9a208299dca3f4626695ed7c2d1070f1fc5edb7ddf9b844819704cbe3cd49"} Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.163225 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.170559 4833 generic.go:334] "Generic (PLEG): container finished" podID="8ea405ab-2d04-4b76-859f-8158f708fb4b" containerID="724501880e68d885bb47789c562c0ff317284996d19a089bb31aafed73890080" exitCode=0 Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.171618 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ea405ab-2d04-4b76-859f-8158f708fb4b","Type":"ContainerDied","Data":"724501880e68d885bb47789c562c0ff317284996d19a089bb31aafed73890080"} Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.216967 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" 
podStartSLOduration=4.216944721 podStartE2EDuration="4.216944721s" podCreationTimestamp="2026-01-27 14:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:08.183385357 +0000 UTC m=+209.834709759" watchObservedRunningTime="2026-01-27 14:15:08.216944721 +0000 UTC m=+209.868269113" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.218056 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.221203 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c7fb56bb-4s29x"] Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.282178 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.284265 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.450388 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:08 crc kubenswrapper[4833]: I0127 14:15:08.548298 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.192316 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerStarted","Data":"14725bb00fb4a0d426099b4767dfc335a7d52a6a5fc608df6879950f5349e796"} Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.193946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerStarted","Data":"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4"} Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.216641 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p655b" podStartSLOduration=3.765233248 podStartE2EDuration="44.216621769s" podCreationTimestamp="2026-01-27 14:14:25 +0000 UTC" firstStartedPulling="2026-01-27 14:14:27.664523568 +0000 UTC m=+169.315847970" lastFinishedPulling="2026-01-27 14:15:08.115912089 +0000 UTC m=+209.767236491" observedRunningTime="2026-01-27 14:15:09.214596597 +0000 UTC m=+210.865921029" watchObservedRunningTime="2026-01-27 14:15:09.216621769 +0000 UTC m=+210.867946171" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.224726 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="702d5ec8-6425-4a29-a344-40908ec1d15e" path="/var/lib/kubelet/pods/702d5ec8-6425-4a29-a344-40908ec1d15e/volumes" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.225700 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" path="/var/lib/kubelet/pods/b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785/volumes" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.245630 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.272618 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.272673 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.476399 4833 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.500127 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir\") pod \"8ea405ab-2d04-4b76-859f-8158f708fb4b\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.500207 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access\") pod \"8ea405ab-2d04-4b76-859f-8158f708fb4b\" (UID: \"8ea405ab-2d04-4b76-859f-8158f708fb4b\") " Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.500559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8ea405ab-2d04-4b76-859f-8158f708fb4b" (UID: "8ea405ab-2d04-4b76-859f-8158f708fb4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.510252 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8ea405ab-2d04-4b76-859f-8158f708fb4b" (UID: "8ea405ab-2d04-4b76-859f-8158f708fb4b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.531076 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:09 crc kubenswrapper[4833]: E0127 14:15:09.531555 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea405ab-2d04-4b76-859f-8158f708fb4b" containerName="pruner" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.531578 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea405ab-2d04-4b76-859f-8158f708fb4b" containerName="pruner" Jan 27 14:15:09 crc kubenswrapper[4833]: E0127 14:15:09.531592 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" containerName="controller-manager" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.531602 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" containerName="controller-manager" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.531728 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b9ee1d-4ff3-4f04-aff4-0d4c1386c785" containerName="controller-manager" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.531744 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea405ab-2d04-4b76-859f-8158f708fb4b" containerName="pruner" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.532202 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.537998 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.538207 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.538228 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.537999 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.538432 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.538727 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.546742 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.546759 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.601810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " 
pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.601866 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.601913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbb9\" (UniqueName: \"kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.601947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.601973 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.602406 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8ea405ab-2d04-4b76-859f-8158f708fb4b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.602459 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ea405ab-2d04-4b76-859f-8158f708fb4b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.703651 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.703712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.703751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbb9\" (UniqueName: \"kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.703782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " 
pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.703803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.705498 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.707308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.707720 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.711001 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert\") pod 
\"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.725354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbb9\" (UniqueName: \"kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9\") pod \"controller-manager-5bbc9bc44c-jm9mk\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:09 crc kubenswrapper[4833]: I0127 14:15:09.902948 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.130896 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.202840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" event={"ID":"b864d41d-f27c-4bee-bb0f-255af212b5de","Type":"ContainerStarted","Data":"a2ffb7e1c2e52ab525b9207c5dac15cf00fe2c6e0c11c3edb7d1d48085b013f6"} Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.207485 4833 generic.go:334] "Generic (PLEG): container finished" podID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerID="c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4" exitCode=0 Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.207531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerDied","Data":"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4"} Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.214467 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"8ea405ab-2d04-4b76-859f-8158f708fb4b","Type":"ContainerDied","Data":"ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5"} Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.214508 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb353bcfa215b90d019dd566de9eaa45cc94e4391b5fe342074c3f2f435f5c5" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.214584 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.321853 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v4799" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="registry-server" probeResult="failure" output=< Jan 27 14:15:10 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:15:10 crc kubenswrapper[4833]: > Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.437393 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"] Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.907782 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.908653 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.910427 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.911367 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:15:10 crc kubenswrapper[4833]: I0127 14:15:10.918500 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.019069 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.019156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.019210 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.121936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.122418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.122523 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.122555 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.122612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.139597 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access\") pod \"installer-9-crc\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.218575 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x9fn2" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="registry-server" containerID="cri-o://a015f486febcd7c0eb1821d04217591f17e5baac65c1bb1573d10494204d7556" gracePeriod=2 Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.219635 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" event={"ID":"b864d41d-f27c-4bee-bb0f-255af212b5de","Type":"ContainerStarted","Data":"3cece26caa8e1b792a507d3e1c316d4201870270a83e49e77c95d0f089d2d424"} Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.219660 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.224618 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.229135 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.264532 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" podStartSLOduration=7.264501884 podStartE2EDuration="7.264501884s" podCreationTimestamp="2026-01-27 14:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:11.251496468 +0000 UTC m=+212.902820890" watchObservedRunningTime="2026-01-27 14:15:11.264501884 +0000 UTC m=+212.915826306" Jan 27 14:15:11 crc kubenswrapper[4833]: I0127 14:15:11.567898 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:15:12 crc kubenswrapper[4833]: I0127 14:15:12.234103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ede7a1c3-853a-4add-9821-8229c9de4d04","Type":"ContainerStarted","Data":"ddae51302c28af3262ef18fb708e98831693742f3997b5a632bf8e05fdca0ca2"} Jan 27 14:15:14 crc kubenswrapper[4833]: I0127 14:15:14.249569 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ede7a1c3-853a-4add-9821-8229c9de4d04","Type":"ContainerStarted","Data":"b6515b7e7166abed77b72790abba7a50ab71133b20a5cbdf7de3ad9a2cb9b39a"} Jan 27 14:15:15 crc kubenswrapper[4833]: I0127 14:15:15.256995 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f2855d7-f801-45ec-b2fb-245142f74599" containerID="a015f486febcd7c0eb1821d04217591f17e5baac65c1bb1573d10494204d7556" exitCode=0 Jan 27 14:15:15 crc kubenswrapper[4833]: I0127 14:15:15.257092 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerDied","Data":"a015f486febcd7c0eb1821d04217591f17e5baac65c1bb1573d10494204d7556"} Jan 27 14:15:15 crc kubenswrapper[4833]: I0127 14:15:15.276937 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.276912005 podStartE2EDuration="5.276912005s" podCreationTimestamp="2026-01-27 14:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:15.275299446 +0000 UTC m=+216.926623868" watchObservedRunningTime="2026-01-27 14:15:15.276912005 +0000 UTC m=+216.928236407" Jan 27 14:15:15 crc kubenswrapper[4833]: I0127 14:15:15.914584 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:15:15 crc kubenswrapper[4833]: I0127 14:15:15.963374 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.086505 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.086578 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.144277 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.280855 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:16 crc 
kubenswrapper[4833]: I0127 14:15:16.280920 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.304246 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:16 crc kubenswrapper[4833]: I0127 14:15:16.325730 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.173608 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.209705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities\") pod \"6f2855d7-f801-45ec-b2fb-245142f74599\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.209758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvpqr\" (UniqueName: \"kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr\") pod \"6f2855d7-f801-45ec-b2fb-245142f74599\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.209886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content\") pod \"6f2855d7-f801-45ec-b2fb-245142f74599\" (UID: \"6f2855d7-f801-45ec-b2fb-245142f74599\") " Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.211181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities" (OuterVolumeSpecName: "utilities") pod "6f2855d7-f801-45ec-b2fb-245142f74599" (UID: "6f2855d7-f801-45ec-b2fb-245142f74599"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.219868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr" (OuterVolumeSpecName: "kube-api-access-rvpqr") pod "6f2855d7-f801-45ec-b2fb-245142f74599" (UID: "6f2855d7-f801-45ec-b2fb-245142f74599"). InnerVolumeSpecName "kube-api-access-rvpqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.235169 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f2855d7-f801-45ec-b2fb-245142f74599" (UID: "6f2855d7-f801-45ec-b2fb-245142f74599"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.270590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9fn2" event={"ID":"6f2855d7-f801-45ec-b2fb-245142f74599","Type":"ContainerDied","Data":"0e99bba298489556239469e1b3c8eabe189bf654c59ff0d86c2f2d955619b00d"} Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.271019 4833 scope.go:117] "RemoveContainer" containerID="a015f486febcd7c0eb1821d04217591f17e5baac65c1bb1573d10494204d7556" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.270881 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9fn2" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.289704 4833 scope.go:117] "RemoveContainer" containerID="3ef033a939dd09561ac0cb963e9f4d761687b38bbffd59c21b17426405db36f0" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.300936 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"] Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.305233 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9fn2"] Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.312501 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.312536 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvpqr\" (UniqueName: \"kubernetes.io/projected/6f2855d7-f801-45ec-b2fb-245142f74599-kube-api-access-rvpqr\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.312562 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2855d7-f801-45ec-b2fb-245142f74599-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.314714 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.331207 4833 scope.go:117] "RemoveContainer" containerID="a933bc7ff1c4b9bb7b086fa720c5d2fd28f04d9ae3deb2498361a31cec0a61a5" Jan 27 14:15:17 crc kubenswrapper[4833]: I0127 14:15:17.841969 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:15:18 crc kubenswrapper[4833]: 
I0127 14:15:18.445166 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:15:18 crc kubenswrapper[4833]: I0127 14:15:18.445795 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p655b" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="registry-server" containerID="cri-o://14725bb00fb4a0d426099b4767dfc335a7d52a6a5fc608df6879950f5349e796" gracePeriod=2 Jan 27 14:15:19 crc kubenswrapper[4833]: I0127 14:15:19.221403 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" path="/var/lib/kubelet/pods/6f2855d7-f801-45ec-b2fb-245142f74599/volumes" Jan 27 14:15:19 crc kubenswrapper[4833]: I0127 14:15:19.299406 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6vp8" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="registry-server" containerID="cri-o://b2dfdf006c46e1b6ac3d02259b34990f6c6b420c2049c67269126815f5beef84" gracePeriod=2 Jan 27 14:15:19 crc kubenswrapper[4833]: I0127 14:15:19.322191 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:19 crc kubenswrapper[4833]: I0127 14:15:19.373424 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:20 crc kubenswrapper[4833]: I0127 14:15:20.237033 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4799"] Jan 27 14:15:21 crc kubenswrapper[4833]: I0127 14:15:21.310408 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v4799" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="registry-server" 
containerID="cri-o://bf9784597716f8c1319adf62af65ea6eb203a104277d0908888257495c211d57" gracePeriod=2 Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.317765 4833 generic.go:334] "Generic (PLEG): container finished" podID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerID="b2dfdf006c46e1b6ac3d02259b34990f6c6b420c2049c67269126815f5beef84" exitCode=0 Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.317828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerDied","Data":"b2dfdf006c46e1b6ac3d02259b34990f6c6b420c2049c67269126815f5beef84"} Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.320466 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerID="14725bb00fb4a0d426099b4767dfc335a7d52a6a5fc608df6879950f5349e796" exitCode=0 Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.320513 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerDied","Data":"14725bb00fb4a0d426099b4767dfc335a7d52a6a5fc608df6879950f5349e796"} Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.698650 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.793328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content\") pod \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.793463 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities\") pod \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.793600 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9dk4\" (UniqueName: \"kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4\") pod \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\" (UID: \"73933cea-fb47-4e10-b0d8-bb9d2f3f882f\") " Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.794347 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities" (OuterVolumeSpecName: "utilities") pod "73933cea-fb47-4e10-b0d8-bb9d2f3f882f" (UID: "73933cea-fb47-4e10-b0d8-bb9d2f3f882f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.801677 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4" (OuterVolumeSpecName: "kube-api-access-q9dk4") pod "73933cea-fb47-4e10-b0d8-bb9d2f3f882f" (UID: "73933cea-fb47-4e10-b0d8-bb9d2f3f882f"). InnerVolumeSpecName "kube-api-access-q9dk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.848373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73933cea-fb47-4e10-b0d8-bb9d2f3f882f" (UID: "73933cea-fb47-4e10-b0d8-bb9d2f3f882f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.895977 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.896023 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:22 crc kubenswrapper[4833]: I0127 14:15:22.896034 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9dk4\" (UniqueName: \"kubernetes.io/projected/73933cea-fb47-4e10-b0d8-bb9d2f3f882f-kube-api-access-q9dk4\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.329462 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6vp8" event={"ID":"73933cea-fb47-4e10-b0d8-bb9d2f3f882f","Type":"ContainerDied","Data":"2a0c592cb7c7d50a93d4f2bc149c649a7a8a6ea7f33344f391ca9bcf6a2ff8cf"} Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.329513 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6vp8" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.330439 4833 scope.go:117] "RemoveContainer" containerID="b2dfdf006c46e1b6ac3d02259b34990f6c6b420c2049c67269126815f5beef84" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.333032 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c56799f-d464-46ff-a6f0-23426b7385df" containerID="bf9784597716f8c1319adf62af65ea6eb203a104277d0908888257495c211d57" exitCode=0 Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.333080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerDied","Data":"bf9784597716f8c1319adf62af65ea6eb203a104277d0908888257495c211d57"} Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.346824 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.349400 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6vp8"] Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.604113 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.715094 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities\") pod \"7d47e601-7e9f-4c23-9fcf-db5356101a66\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.715181 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") pod \"7d47e601-7e9f-4c23-9fcf-db5356101a66\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.716415 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities" (OuterVolumeSpecName: "utilities") pod "7d47e601-7e9f-4c23-9fcf-db5356101a66" (UID: "7d47e601-7e9f-4c23-9fcf-db5356101a66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.716717 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fts44\" (UniqueName: \"kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44\") pod \"7d47e601-7e9f-4c23-9fcf-db5356101a66\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.717562 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.722626 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44" (OuterVolumeSpecName: "kube-api-access-fts44") pod "7d47e601-7e9f-4c23-9fcf-db5356101a66" (UID: "7d47e601-7e9f-4c23-9fcf-db5356101a66"). InnerVolumeSpecName "kube-api-access-fts44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.818905 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fts44\" (UniqueName: \"kubernetes.io/projected/7d47e601-7e9f-4c23-9fcf-db5356101a66-kube-api-access-fts44\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.919935 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d47e601-7e9f-4c23-9fcf-db5356101a66" (UID: "7d47e601-7e9f-4c23-9fcf-db5356101a66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.920148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") pod \"7d47e601-7e9f-4c23-9fcf-db5356101a66\" (UID: \"7d47e601-7e9f-4c23-9fcf-db5356101a66\") " Jan 27 14:15:23 crc kubenswrapper[4833]: W0127 14:15:23.920309 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7d47e601-7e9f-4c23-9fcf-db5356101a66/volumes/kubernetes.io~empty-dir/catalog-content Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.920327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d47e601-7e9f-4c23-9fcf-db5356101a66" (UID: "7d47e601-7e9f-4c23-9fcf-db5356101a66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:23 crc kubenswrapper[4833]: I0127 14:15:23.920387 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d47e601-7e9f-4c23-9fcf-db5356101a66-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.029567 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.123840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities\") pod \"0c56799f-d464-46ff-a6f0-23426b7385df\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.123898 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content\") pod \"0c56799f-d464-46ff-a6f0-23426b7385df\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.124034 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjffp\" (UniqueName: \"kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp\") pod \"0c56799f-d464-46ff-a6f0-23426b7385df\" (UID: \"0c56799f-d464-46ff-a6f0-23426b7385df\") " Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.124736 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities" (OuterVolumeSpecName: "utilities") pod "0c56799f-d464-46ff-a6f0-23426b7385df" (UID: "0c56799f-d464-46ff-a6f0-23426b7385df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.128500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp" (OuterVolumeSpecName: "kube-api-access-mjffp") pod "0c56799f-d464-46ff-a6f0-23426b7385df" (UID: "0c56799f-d464-46ff-a6f0-23426b7385df"). InnerVolumeSpecName "kube-api-access-mjffp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.128616 4833 scope.go:117] "RemoveContainer" containerID="fd1d3c72522b509991394a1b83b387dfcddf7a713386e8beb7b9ae1a2d3ac917" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.225928 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.226472 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjffp\" (UniqueName: \"kubernetes.io/projected/0c56799f-d464-46ff-a6f0-23426b7385df-kube-api-access-mjffp\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.294581 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c56799f-d464-46ff-a6f0-23426b7385df" (UID: "0c56799f-d464-46ff-a6f0-23426b7385df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.328540 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c56799f-d464-46ff-a6f0-23426b7385df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.347203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4799" event={"ID":"0c56799f-d464-46ff-a6f0-23426b7385df","Type":"ContainerDied","Data":"3facb04bfca6291c54b4d393ff4ba5f399030ca209a44447fc8b9fd535156cfe"} Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.347407 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4799" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.351420 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p655b" event={"ID":"7d47e601-7e9f-4c23-9fcf-db5356101a66","Type":"ContainerDied","Data":"473076b222dd33480a8331fbafc77bf785cf70465506bb22f2e6a0b7031e7aac"} Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.351615 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p655b" Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.390591 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4799"] Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.396174 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v4799"] Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.400214 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:15:24 crc kubenswrapper[4833]: I0127 14:15:24.406844 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p655b"] Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.126590 4833 scope.go:117] "RemoveContainer" containerID="3a18beb562e0bccc49e66f2d905f2b2fea669204c36b018ee237cdc1dca000a1" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.219648 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" path="/var/lib/kubelet/pods/0c56799f-d464-46ff-a6f0-23426b7385df/volumes" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.220349 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" path="/var/lib/kubelet/pods/73933cea-fb47-4e10-b0d8-bb9d2f3f882f/volumes" Jan 27 14:15:25 crc 
kubenswrapper[4833]: I0127 14:15:25.221113 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" path="/var/lib/kubelet/pods/7d47e601-7e9f-4c23-9fcf-db5356101a66/volumes" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.832628 4833 scope.go:117] "RemoveContainer" containerID="bf9784597716f8c1319adf62af65ea6eb203a104277d0908888257495c211d57" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.869810 4833 scope.go:117] "RemoveContainer" containerID="2aecb4f63c997cdd1e4e6bee335232bf1bf81d629e3ca1cec466b4ea9df3e677" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.909079 4833 scope.go:117] "RemoveContainer" containerID="a989d4193b3358c4993ab9c8e0f7253593dc46c2dc18bf1e60537a1c7b87d728" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.942462 4833 scope.go:117] "RemoveContainer" containerID="14725bb00fb4a0d426099b4767dfc335a7d52a6a5fc608df6879950f5349e796" Jan 27 14:15:25 crc kubenswrapper[4833]: I0127 14:15:25.980004 4833 scope.go:117] "RemoveContainer" containerID="0d31d2d9cc63c980d226e202bb0121f9580e01aeb57d94c76c4e69be707ae4f5" Jan 27 14:15:26 crc kubenswrapper[4833]: I0127 14:15:26.016467 4833 scope.go:117] "RemoveContainer" containerID="23fdfeeacabae801d03c4b86aec8f71204168b27363a9adb0b978af5c0c2697b" Jan 27 14:15:27 crc kubenswrapper[4833]: I0127 14:15:27.383582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerStarted","Data":"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b"} Jan 27 14:15:27 crc kubenswrapper[4833]: I0127 14:15:27.386979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerStarted","Data":"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c"} Jan 27 14:15:27 crc kubenswrapper[4833]: I0127 
14:15:27.389991 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerStarted","Data":"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41"} Jan 27 14:15:27 crc kubenswrapper[4833]: I0127 14:15:27.409223 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5pcs" podStartSLOduration=4.946210927 podStartE2EDuration="1m2.409200363s" podCreationTimestamp="2026-01-27 14:14:25 +0000 UTC" firstStartedPulling="2026-01-27 14:14:27.663581482 +0000 UTC m=+169.314905884" lastFinishedPulling="2026-01-27 14:15:25.126570908 +0000 UTC m=+226.777895320" observedRunningTime="2026-01-27 14:15:27.40715636 +0000 UTC m=+229.058480762" watchObservedRunningTime="2026-01-27 14:15:27.409200363 +0000 UTC m=+229.060524765" Jan 27 14:15:28 crc kubenswrapper[4833]: I0127 14:15:28.397116 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerID="2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41" exitCode=0 Jan 27 14:15:28 crc kubenswrapper[4833]: I0127 14:15:28.397183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerDied","Data":"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41"} Jan 27 14:15:28 crc kubenswrapper[4833]: I0127 14:15:28.399419 4833 generic.go:334] "Generic (PLEG): container finished" podID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerID="4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c" exitCode=0 Jan 27 14:15:28 crc kubenswrapper[4833]: I0127 14:15:28.399471 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" 
event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerDied","Data":"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c"} Jan 27 14:15:30 crc kubenswrapper[4833]: I0127 14:15:30.413630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerStarted","Data":"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954"} Jan 27 14:15:31 crc kubenswrapper[4833]: I0127 14:15:31.419260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerStarted","Data":"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41"} Jan 27 14:15:31 crc kubenswrapper[4833]: I0127 14:15:31.439111 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9thj" podStartSLOduration=3.421162378 podStartE2EDuration="1m4.439096447s" podCreationTimestamp="2026-01-27 14:14:27 +0000 UTC" firstStartedPulling="2026-01-27 14:14:28.690898248 +0000 UTC m=+170.342222650" lastFinishedPulling="2026-01-27 14:15:29.708832327 +0000 UTC m=+231.360156719" observedRunningTime="2026-01-27 14:15:31.43657757 +0000 UTC m=+233.087901972" watchObservedRunningTime="2026-01-27 14:15:31.439096447 +0000 UTC m=+233.090420849" Jan 27 14:15:31 crc kubenswrapper[4833]: I0127 14:15:31.456873 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9g5pw" podStartSLOduration=3.633232231 podStartE2EDuration="1m3.456851998s" podCreationTimestamp="2026-01-27 14:14:28 +0000 UTC" firstStartedPulling="2026-01-27 14:14:30.786752245 +0000 UTC m=+172.438076647" lastFinishedPulling="2026-01-27 14:15:30.610372012 +0000 UTC m=+232.261696414" observedRunningTime="2026-01-27 14:15:31.455306241 +0000 UTC m=+233.106630663" 
watchObservedRunningTime="2026-01-27 14:15:31.456851998 +0000 UTC m=+233.108176400" Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.260966 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.261045 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.261091 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.261918 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.262015 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b" gracePeriod=600 Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.429339 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b" exitCode=0 Jan 27 14:15:32 crc kubenswrapper[4833]: I0127 14:15:32.429398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b"} Jan 27 14:15:33 crc kubenswrapper[4833]: I0127 14:15:33.437817 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5"} Jan 27 14:15:35 crc kubenswrapper[4833]: I0127 14:15:35.702970 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:15:35 crc kubenswrapper[4833]: I0127 14:15:35.704804 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:15:35 crc kubenswrapper[4833]: I0127 14:15:35.744025 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:15:36 crc kubenswrapper[4833]: I0127 14:15:36.501573 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:15:37 crc kubenswrapper[4833]: I0127 14:15:37.546618 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-pbxl9"] Jan 27 14:15:37 crc kubenswrapper[4833]: I0127 14:15:37.870278 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:15:37 crc kubenswrapper[4833]: I0127 
14:15:37.870339 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:15:37 crc kubenswrapper[4833]: I0127 14:15:37.909047 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:15:38 crc kubenswrapper[4833]: I0127 14:15:38.515298 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:15:38 crc kubenswrapper[4833]: I0127 14:15:38.912072 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:15:38 crc kubenswrapper[4833]: I0127 14:15:38.912120 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:15:38 crc kubenswrapper[4833]: I0127 14:15:38.959077 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:15:39 crc kubenswrapper[4833]: I0127 14:15:39.515998 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.340830 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.342266 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" podUID="b864d41d-f27c-4bee-bb0f-255af212b5de" containerName="controller-manager" containerID="cri-o://3cece26caa8e1b792a507d3e1c316d4201870270a83e49e77c95d0f089d2d424" gracePeriod=30 Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.437220 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.437456 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" podUID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" containerName="route-controller-manager" containerID="cri-o://10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e" gracePeriod=30 Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.504780 4833 generic.go:334] "Generic (PLEG): container finished" podID="b864d41d-f27c-4bee-bb0f-255af212b5de" containerID="3cece26caa8e1b792a507d3e1c316d4201870270a83e49e77c95d0f089d2d424" exitCode=0 Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.504865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" event={"ID":"b864d41d-f27c-4bee-bb0f-255af212b5de","Type":"ContainerDied","Data":"3cece26caa8e1b792a507d3e1c316d4201870270a83e49e77c95d0f089d2d424"} Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.920320 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:44 crc kubenswrapper[4833]: I0127 14:15:44.923929 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012359 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca\") pod \"b864d41d-f27c-4bee-bb0f-255af212b5de\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca\") pod \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012531 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles\") pod \"b864d41d-f27c-4bee-bb0f-255af212b5de\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012570 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config\") pod \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config\") pod \"b864d41d-f27c-4bee-bb0f-255af212b5de\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zlckg\" (UniqueName: \"kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg\") pod \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012704 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert\") pod \"b864d41d-f27c-4bee-bb0f-255af212b5de\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert\") pod \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\" (UID: \"5cacbdb1-e803-4d8b-830f-130a6cd3a225\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.012753 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbb9\" (UniqueName: \"kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9\") pod \"b864d41d-f27c-4bee-bb0f-255af212b5de\" (UID: \"b864d41d-f27c-4bee-bb0f-255af212b5de\") " Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.013313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca" (OuterVolumeSpecName: "client-ca") pod "b864d41d-f27c-4bee-bb0f-255af212b5de" (UID: "b864d41d-f27c-4bee-bb0f-255af212b5de"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.013376 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config" (OuterVolumeSpecName: "config") pod "b864d41d-f27c-4bee-bb0f-255af212b5de" (UID: "b864d41d-f27c-4bee-bb0f-255af212b5de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.014021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca" (OuterVolumeSpecName: "client-ca") pod "5cacbdb1-e803-4d8b-830f-130a6cd3a225" (UID: "5cacbdb1-e803-4d8b-830f-130a6cd3a225"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.014241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config" (OuterVolumeSpecName: "config") pod "5cacbdb1-e803-4d8b-830f-130a6cd3a225" (UID: "5cacbdb1-e803-4d8b-830f-130a6cd3a225"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.014341 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b864d41d-f27c-4bee-bb0f-255af212b5de" (UID: "b864d41d-f27c-4bee-bb0f-255af212b5de"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.018629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b864d41d-f27c-4bee-bb0f-255af212b5de" (UID: "b864d41d-f27c-4bee-bb0f-255af212b5de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.019150 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5cacbdb1-e803-4d8b-830f-130a6cd3a225" (UID: "5cacbdb1-e803-4d8b-830f-130a6cd3a225"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.019183 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg" (OuterVolumeSpecName: "kube-api-access-zlckg") pod "5cacbdb1-e803-4d8b-830f-130a6cd3a225" (UID: "5cacbdb1-e803-4d8b-830f-130a6cd3a225"). InnerVolumeSpecName "kube-api-access-zlckg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.019334 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9" (OuterVolumeSpecName: "kube-api-access-grbb9") pod "b864d41d-f27c-4bee-bb0f-255af212b5de" (UID: "b864d41d-f27c-4bee-bb0f-255af212b5de"). InnerVolumeSpecName "kube-api-access-grbb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113914 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113945 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113957 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113967 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlckg\" (UniqueName: \"kubernetes.io/projected/5cacbdb1-e803-4d8b-830f-130a6cd3a225-kube-api-access-zlckg\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113977 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b864d41d-f27c-4bee-bb0f-255af212b5de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113986 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cacbdb1-e803-4d8b-830f-130a6cd3a225-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.113994 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbb9\" (UniqueName: \"kubernetes.io/projected/b864d41d-f27c-4bee-bb0f-255af212b5de-kube-api-access-grbb9\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.114003 4833 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b864d41d-f27c-4bee-bb0f-255af212b5de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.114012 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5cacbdb1-e803-4d8b-830f-130a6cd3a225-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.510789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" event={"ID":"b864d41d-f27c-4bee-bb0f-255af212b5de","Type":"ContainerDied","Data":"a2ffb7e1c2e52ab525b9207c5dac15cf00fe2c6e0c11c3edb7d1d48085b013f6"} Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.510851 4833 scope.go:117] "RemoveContainer" containerID="3cece26caa8e1b792a507d3e1c316d4201870270a83e49e77c95d0f089d2d424" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.512663 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.513228 4833 generic.go:334] "Generic (PLEG): container finished" podID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" containerID="10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e" exitCode=0 Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.513271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" event={"ID":"5cacbdb1-e803-4d8b-830f-130a6cd3a225","Type":"ContainerDied","Data":"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e"} Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.513304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" event={"ID":"5cacbdb1-e803-4d8b-830f-130a6cd3a225","Type":"ContainerDied","Data":"d6f9a208299dca3f4626695ed7c2d1070f1fc5edb7ddf9b844819704cbe3cd49"} Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.513305 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.528187 4833 scope.go:117] "RemoveContainer" containerID="10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.532125 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.535410 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bbc9bc44c-jm9mk"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540054 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540230 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540241 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540248 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b864d41d-f27c-4bee-bb0f-255af212b5de" containerName="controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540255 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b864d41d-f27c-4bee-bb0f-255af212b5de" containerName="controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540263 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540269 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540281 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" containerName="route-controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540286 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" containerName="route-controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540295 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540300 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540307 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540313 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540320 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540325 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540332 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540337 4833 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540348 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540355 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540362 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540369 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540378 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540385 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540396 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540401 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540409 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540415 4833 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="extract-utilities" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.540422 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540428 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="extract-content" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540533 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="73933cea-fb47-4e10-b0d8-bb9d2f3f882f" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540542 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" containerName="route-controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540555 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b864d41d-f27c-4bee-bb0f-255af212b5de" containerName="controller-manager" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540563 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d47e601-7e9f-4c23-9fcf-db5356101a66" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540570 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c56799f-d464-46ff-a6f0-23426b7385df" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540577 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f2855d7-f801-45ec-b2fb-245142f74599" containerName="registry-server" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.540891 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.542879 4833 scope.go:117] "RemoveContainer" containerID="10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543179 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543259 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:15:45 crc kubenswrapper[4833]: E0127 14:15:45.543254 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e\": container with ID starting with 10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e not found: ID does not exist" containerID="10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543345 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543343 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e"} err="failed to get container status \"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e\": rpc error: code = NotFound desc = could not find container \"10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e\": container with ID starting with 10ebb28b198b240b82cd254b30af725d6071b34fa1506ad1c2d1e5ac98728d8e not found: ID does not exist" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543392 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.543593 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.544406 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.549736 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.550246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.553392 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.553606 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.553761 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.553839 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.554735 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.555102 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.557516 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.561514 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.571834 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59494bfd84-s4xbw"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.585125 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.585216 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625055 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 
14:15:45.625120 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625217 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625250 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnvg\" (UniqueName: \"kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config\") pod 
\"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq5bh\" (UniqueName: \"kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.625353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726735 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726805 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnvg\" (UniqueName: \"kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " 
pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq5bh\" (UniqueName: \"kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.726965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.727855 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.728547 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.729131 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.729699 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.730641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.735264 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.735659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.747094 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fnnvg\" (UniqueName: \"kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg\") pod \"route-controller-manager-57576445d7-mrsst\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.755108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq5bh\" (UniqueName: \"kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh\") pod \"controller-manager-6dd8668d58-wsqb8\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.866317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:45 crc kubenswrapper[4833]: I0127 14:15:45.906453 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.151724 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst"] Jan 27 14:15:46 crc kubenswrapper[4833]: W0127 14:15:46.160886 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a901c93_41b5_4767_92b0_e5ac4f9b4a2c.slice/crio-bbad0e344021534927e1d5d2fd9c31d82e04f3d68dae0c30229c0a736f4054c4 WatchSource:0}: Error finding container bbad0e344021534927e1d5d2fd9c31d82e04f3d68dae0c30229c0a736f4054c4: Status 404 returned error can't find the container with id bbad0e344021534927e1d5d2fd9c31d82e04f3d68dae0c30229c0a736f4054c4 Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.290114 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:15:46 crc kubenswrapper[4833]: W0127 14:15:46.300003 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a51624_cf21_46ec_9894_8b91af168053.slice/crio-61861eaf67def11002ad749143e6af83ce13ccee6690eae9c866997a56d66397 WatchSource:0}: Error finding container 61861eaf67def11002ad749143e6af83ce13ccee6690eae9c866997a56d66397: Status 404 returned error can't find the container with id 61861eaf67def11002ad749143e6af83ce13ccee6690eae9c866997a56d66397 Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.520782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" event={"ID":"b3a51624-cf21-46ec-9894-8b91af168053","Type":"ContainerStarted","Data":"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36"} Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.520830 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" event={"ID":"b3a51624-cf21-46ec-9894-8b91af168053","Type":"ContainerStarted","Data":"61861eaf67def11002ad749143e6af83ce13ccee6690eae9c866997a56d66397"} Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.520854 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.522919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" event={"ID":"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c","Type":"ContainerStarted","Data":"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6"} Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.522955 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" event={"ID":"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c","Type":"ContainerStarted","Data":"bbad0e344021534927e1d5d2fd9c31d82e04f3d68dae0c30229c0a736f4054c4"} Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.523424 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.525718 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.543397 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" podStartSLOduration=2.543383606 podStartE2EDuration="2.543383606s" podCreationTimestamp="2026-01-27 14:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:46.541673263 +0000 UTC m=+248.192997675" watchObservedRunningTime="2026-01-27 14:15:46.543383606 +0000 UTC m=+248.194708008" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.590867 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" podStartSLOduration=2.590851834 podStartE2EDuration="2.590851834s" podCreationTimestamp="2026-01-27 14:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:15:46.588139161 +0000 UTC m=+248.239463563" watchObservedRunningTime="2026-01-27 14:15:46.590851834 +0000 UTC m=+248.242176236" Jan 27 14:15:46 crc kubenswrapper[4833]: I0127 14:15:46.686608 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:15:47 crc kubenswrapper[4833]: I0127 14:15:47.219616 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cacbdb1-e803-4d8b-830f-130a6cd3a225" path="/var/lib/kubelet/pods/5cacbdb1-e803-4d8b-830f-130a6cd3a225/volumes" Jan 27 14:15:47 crc kubenswrapper[4833]: I0127 14:15:47.221293 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b864d41d-f27c-4bee-bb0f-255af212b5de" path="/var/lib/kubelet/pods/b864d41d-f27c-4bee-bb0f-255af212b5de/volumes" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.324398 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.325084 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293" gracePeriod=15 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.325194 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661" gracePeriod=15 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.325257 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8" gracePeriod=15 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.325034 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211" gracePeriod=15 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327145 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327337 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08" gracePeriod=15 Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327703 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327758 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327774 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327784 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327824 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327834 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327844 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327853 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327873 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327909 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327923 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327934 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.327943 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.327952 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328180 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328248 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328267 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328282 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328296 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328327 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 
14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328341 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.328521 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.328534 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.331079 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.331878 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.342025 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.369938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409611 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409632 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409674 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409688 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: 
I0127 14:15:51.409719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.409748 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.495307 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.511592 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.511897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 
14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512129 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.511968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.511735 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.512722 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513608 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513936 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.513960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.551696 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.553633 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.554295 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08" exitCode=0 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.554320 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661" exitCode=0 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.554329 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293" exitCode=0 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.554337 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8" exitCode=2 Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.554373 4833 scope.go:117] "RemoveContainer" containerID="07477fe9d1f8d33bcd654034c2cfe31f3c97e17bc08a65a83c0e22ab9ee27575" Jan 27 14:15:51 crc kubenswrapper[4833]: I0127 14:15:51.666861 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:15:51 crc kubenswrapper[4833]: E0127 14:15:51.701467 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9c1cefdce223 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:15:51.700431395 +0000 UTC m=+253.351755797,LastTimestamp:2026-01-27 14:15:51.700431395 +0000 UTC m=+253.351755797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.564161 4833 generic.go:334] "Generic (PLEG): container finished" podID="ede7a1c3-853a-4add-9821-8229c9de4d04" containerID="b6515b7e7166abed77b72790abba7a50ab71133b20a5cbdf7de3ad9a2cb9b39a" exitCode=0 Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.564265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ede7a1c3-853a-4add-9821-8229c9de4d04","Type":"ContainerDied","Data":"b6515b7e7166abed77b72790abba7a50ab71133b20a5cbdf7de3ad9a2cb9b39a"} Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.565057 4833 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.565336 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.565932 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"67ce8cf455c81a74eb4db83800d9474a9ba7c01ef0092ca876001734a0a046ae"} Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.565965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"dc14abcaacbaee00d4407d708fe6237b360e0ac7c9a3bde5628ea38f20d939af"} Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.566631 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.567361 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:52 crc kubenswrapper[4833]: I0127 14:15:52.571089 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.803909 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.805375 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.806380 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.807051 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.807413 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: 
connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.948329 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.949123 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.949851 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.950179 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954436 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954479 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954768 4833 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954788 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:53 crc kubenswrapper[4833]: I0127 14:15:53.954803 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.056110 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir\") pod \"ede7a1c3-853a-4add-9821-8229c9de4d04\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.056173 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock\") pod \"ede7a1c3-853a-4add-9821-8229c9de4d04\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.056243 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access\") pod \"ede7a1c3-853a-4add-9821-8229c9de4d04\" (UID: \"ede7a1c3-853a-4add-9821-8229c9de4d04\") " Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.056609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock" (OuterVolumeSpecName: "var-lock") pod "ede7a1c3-853a-4add-9821-8229c9de4d04" (UID: "ede7a1c3-853a-4add-9821-8229c9de4d04"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.056621 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ede7a1c3-853a-4add-9821-8229c9de4d04" (UID: "ede7a1c3-853a-4add-9821-8229c9de4d04"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.061710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ede7a1c3-853a-4add-9821-8229c9de4d04" (UID: "ede7a1c3-853a-4add-9821-8229c9de4d04"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.157518 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.157561 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ede7a1c3-853a-4add-9821-8229c9de4d04-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.157570 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ede7a1c3-853a-4add-9821-8229c9de4d04-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.584575 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ede7a1c3-853a-4add-9821-8229c9de4d04","Type":"ContainerDied","Data":"ddae51302c28af3262ef18fb708e98831693742f3997b5a632bf8e05fdca0ca2"} Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.585018 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddae51302c28af3262ef18fb708e98831693742f3997b5a632bf8e05fdca0ca2" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.584756 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.588678 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.589238 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211" exitCode=0 Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.589288 4833 scope.go:117] "RemoveContainer" containerID="7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.589420 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.607994 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.608821 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.609354 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.614759 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.614937 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.615101 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.615688 4833 scope.go:117] "RemoveContainer" containerID="df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.629705 4833 scope.go:117] "RemoveContainer" containerID="79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.643476 4833 scope.go:117] "RemoveContainer" containerID="22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.667170 4833 scope.go:117] "RemoveContainer" 
containerID="b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.701087 4833 scope.go:117] "RemoveContainer" containerID="cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.731205 4833 scope.go:117] "RemoveContainer" containerID="7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.732112 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\": container with ID starting with 7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08 not found: ID does not exist" containerID="7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.732169 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08"} err="failed to get container status \"7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\": rpc error: code = NotFound desc = could not find container \"7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08\": container with ID starting with 7a7b7a1b6d6a31451384e6fad0b9d129e357fbdc15d0b147a7a940509a07bd08 not found: ID does not exist" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.732202 4833 scope.go:117] "RemoveContainer" containerID="df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.732608 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\": container with ID starting with 
df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661 not found: ID does not exist" containerID="df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.732656 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661"} err="failed to get container status \"df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\": rpc error: code = NotFound desc = could not find container \"df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661\": container with ID starting with df577ca1411fb65ef6c04fcc4a47d1714ea4f53aaab2aef5de5b0d9632ae5661 not found: ID does not exist" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.732690 4833 scope.go:117] "RemoveContainer" containerID="79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.733105 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\": container with ID starting with 79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293 not found: ID does not exist" containerID="79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.733132 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293"} err="failed to get container status \"79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\": rpc error: code = NotFound desc = could not find container \"79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293\": container with ID starting with 79e58319b7806ecc19d3bc0019eaf1342818d29f9c170d763aa969bf71983293 not found: ID does not 
exist" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.733150 4833 scope.go:117] "RemoveContainer" containerID="22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.733578 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\": container with ID starting with 22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8 not found: ID does not exist" containerID="22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.733687 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8"} err="failed to get container status \"22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\": rpc error: code = NotFound desc = could not find container \"22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8\": container with ID starting with 22a0f9039193292029e01685936a330aaad087beefc853d9defa7ea156accbd8 not found: ID does not exist" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.733764 4833 scope.go:117] "RemoveContainer" containerID="b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.734125 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\": container with ID starting with b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211 not found: ID does not exist" containerID="b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.734194 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211"} err="failed to get container status \"b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\": rpc error: code = NotFound desc = could not find container \"b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211\": container with ID starting with b0dece4192ecc1cbb60974efa4bc5aab28f21267581ea2a66e2abd0c62c51211 not found: ID does not exist" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.734222 4833 scope.go:117] "RemoveContainer" containerID="cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a" Jan 27 14:15:54 crc kubenswrapper[4833]: E0127 14:15:54.734603 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\": container with ID starting with cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a not found: ID does not exist" containerID="cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a" Jan 27 14:15:54 crc kubenswrapper[4833]: I0127 14:15:54.734634 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a"} err="failed to get container status \"cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\": rpc error: code = NotFound desc = could not find container \"cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a\": container with ID starting with cd3786c74523ca2f36a32c60f3e025e404d3137f693d6dde8081a636e6865a0a not found: ID does not exist" Jan 27 14:15:55 crc kubenswrapper[4833]: I0127 14:15:55.232514 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 14:15:56 crc 
kubenswrapper[4833]: E0127 14:15:56.120362 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.120816 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.121348 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.121649 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.121933 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:56 crc kubenswrapper[4833]: I0127 14:15:56.121976 4833 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.122232 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="200ms" Jan 27 14:15:56 
crc kubenswrapper[4833]: E0127 14:15:56.323379 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="400ms" Jan 27 14:15:56 crc kubenswrapper[4833]: E0127 14:15:56.724493 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="800ms" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.525142 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="1.6s" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.959664 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:15:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:15:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:15:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:15:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.960206 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.960529 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.960934 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 
14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.961705 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:57 crc kubenswrapper[4833]: E0127 14:15:57.961737 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:15:59 crc kubenswrapper[4833]: E0127 14:15:59.127628 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="3.2s" Jan 27 14:15:59 crc kubenswrapper[4833]: I0127 14:15:59.214409 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:15:59 crc kubenswrapper[4833]: I0127 14:15:59.215326 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:00 crc kubenswrapper[4833]: E0127 14:16:00.228024 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.128:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9c1cefdce223 openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:15:51.700431395 +0000 UTC m=+253.351755797,LastTimestamp:2026-01-27 14:15:51.700431395 +0000 UTC m=+253.351755797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:16:02 crc kubenswrapper[4833]: E0127 14:16:02.329231 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.128:6443: connect: connection refused" interval="6.4s" Jan 27 14:16:02 crc kubenswrapper[4833]: I0127 14:16:02.574540 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" podUID="810332c8-987e-485f-9940-d1b61944b1a8" containerName="oauth-openshift" containerID="cri-o://c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4" gracePeriod=15 Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.019304 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.020199 4833 status_manager.go:851] "Failed to get status for pod" podUID="810332c8-987e-485f-9940-d1b61944b1a8" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pbxl9\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.021013 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.021324 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.077709 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.077810 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" 
(UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.077914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.077993 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078051 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078081 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k2lc\" (UniqueName: \"kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078110 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 
14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078190 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078210 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078273 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection\") pod \"810332c8-987e-485f-9940-d1b61944b1a8\" (UID: \"810332c8-987e-485f-9940-d1b61944b1a8\") " Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078784 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.078855 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.079111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.079164 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.084059 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc" (OuterVolumeSpecName: "kube-api-access-9k2lc") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "kube-api-access-9k2lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.088332 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.088437 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.088792 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.089122 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.089228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.089419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.089592 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.089888 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "810332c8-987e-485f-9940-d1b61944b1a8" (UID: "810332c8-987e-485f-9940-d1b61944b1a8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180211 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180271 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180287 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180301 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180315 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k2lc\" (UniqueName: \"kubernetes.io/projected/810332c8-987e-485f-9940-d1b61944b1a8-kube-api-access-9k2lc\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180327 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180338 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180349 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/810332c8-987e-485f-9940-d1b61944b1a8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180362 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180372 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180381 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180390 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.180400 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/810332c8-987e-485f-9940-d1b61944b1a8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.210301 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.210844 4833 status_manager.go:851] "Failed to get status for pod" podUID="810332c8-987e-485f-9940-d1b61944b1a8" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pbxl9\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.211203 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.211630 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.223633 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.223667 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:03 crc kubenswrapper[4833]: E0127 14:16:03.224092 4833 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.224592 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.649855 4833 generic.go:334] "Generic (PLEG): container finished" podID="810332c8-987e-485f-9940-d1b61944b1a8" containerID="c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4" exitCode=0 Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.649954 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.649986 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" event={"ID":"810332c8-987e-485f-9940-d1b61944b1a8","Type":"ContainerDied","Data":"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4"} Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.650392 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" event={"ID":"810332c8-987e-485f-9940-d1b61944b1a8","Type":"ContainerDied","Data":"390bad0051d4ea4277ca5a892682754120dfb7df134d93d78c6bb641c137b6d2"} Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.650424 4833 scope.go:117] "RemoveContainer" containerID="c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.650963 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.651735 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.652573 4833 status_manager.go:851] "Failed to get status for pod" podUID="810332c8-987e-485f-9940-d1b61944b1a8" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pbxl9\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653191 4833 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2db8b892f48062681db8b041e07c45f63297171a11b95c3ff9ad4a9ae95ca8b9" exitCode=0 Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2db8b892f48062681db8b041e07c45f63297171a11b95c3ff9ad4a9ae95ca8b9"} Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5850bd3bca924c535a1dbfa82d9868b64bc2588247eff0e7b0d80cf19631d39"} Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653454 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653468 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.653966 4833 status_manager.go:851] "Failed to get status for pod" podUID="810332c8-987e-485f-9940-d1b61944b1a8" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pbxl9\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: E0127 14:16:03.654210 4833 mirror_client.go:138] "Failed deleting a 
mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.654277 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.654622 4833 status_manager.go:851] "Failed to get status for pod" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.656083 4833 status_manager.go:851] "Failed to get status for pod" podUID="810332c8-987e-485f-9940-d1b61944b1a8" pod="openshift-authentication/oauth-openshift-558db77b4-pbxl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-pbxl9\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.657461 4833 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.657758 4833 status_manager.go:851] "Failed to get status for pod" 
podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.128:6443: connect: connection refused" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.671992 4833 scope.go:117] "RemoveContainer" containerID="c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4" Jan 27 14:16:03 crc kubenswrapper[4833]: E0127 14:16:03.673180 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4\": container with ID starting with c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4 not found: ID does not exist" containerID="c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4" Jan 27 14:16:03 crc kubenswrapper[4833]: I0127 14:16:03.673231 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4"} err="failed to get container status \"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4\": rpc error: code = NotFound desc = could not find container \"c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4\": container with ID starting with c3ecee57885acb863da3d1887503e3907fdfdd26a2f8fa0ab137d639ac6b08b4 not found: ID does not exist" Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.662161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6434d83655e20d219ba38a31adaca22646adbe5d38ecab407b63028a7c2f5e14"} Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.662218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f3b4ec56d445fe3136974a4938590cc59b0e4d1693301b84fc375c4a2d805402"} Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.662242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f604de47bbd17a80bcf6e02f3de3d4ac61055c9123f820797d058a2628172042"} Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.662254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f46248d38bf4272746ed9e47ef1633ddcd885a807d20b2a362d3064471605725"} Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.668292 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.668655 4833 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b" exitCode=1 Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.668722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b"} Jan 27 14:16:04 crc kubenswrapper[4833]: I0127 14:16:04.669241 4833 scope.go:117] "RemoveContainer" containerID="943dcec7a1a2ddd11c83c43a958057a1400f2536b99b64f28bb5c17ee3328b9b" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.047508 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.059824 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.683800 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c50be46959a3cd02a31871aa868e5fd7274006da63c212e61b10bbcdb0d7115a"} Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.684287 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.684639 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.684681 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.687726 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 14:16:05 crc kubenswrapper[4833]: I0127 14:16:05.687788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"24f32edaca3eaa4374035ad5f1c6e36a03af36a4eb8879558b13fdcf2cd33a9d"} Jan 27 14:16:08 crc kubenswrapper[4833]: I0127 14:16:08.225090 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:08 crc kubenswrapper[4833]: 
I0127 14:16:08.225547 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:08 crc kubenswrapper[4833]: I0127 14:16:08.232255 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:10 crc kubenswrapper[4833]: I0127 14:16:10.699515 4833 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:10 crc kubenswrapper[4833]: I0127 14:16:10.719276 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:10 crc kubenswrapper[4833]: I0127 14:16:10.719318 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:10 crc kubenswrapper[4833]: I0127 14:16:10.725941 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:10 crc kubenswrapper[4833]: I0127 14:16:10.729490 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6ae80825-c7a6-401d-b467-b311ed863cb3" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.202001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.202066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.202125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.202163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.205965 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.205965 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.206503 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.214512 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.214661 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.225921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.230167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.238997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.347271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.359891 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.369282 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.734734 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:11 crc kubenswrapper[4833]: I0127 14:16:11.735167 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:11 crc kubenswrapper[4833]: W0127 14:16:11.845517 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-289a9437769ef3759576bf5d7dfcd81fb6617b7c9e19a34bf61e48c91d4bb81a WatchSource:0}: Error finding container 289a9437769ef3759576bf5d7dfcd81fb6617b7c9e19a34bf61e48c91d4bb81a: Status 404 returned error can't find the container with id 289a9437769ef3759576bf5d7dfcd81fb6617b7c9e19a34bf61e48c91d4bb81a Jan 27 14:16:11 crc kubenswrapper[4833]: W0127 14:16:11.895827 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ceec9dd89055791ccc31eb79062b37f309a94b0d48771ee8fb5f3eb41c1d79c8 WatchSource:0}: Error finding container ceec9dd89055791ccc31eb79062b37f309a94b0d48771ee8fb5f3eb41c1d79c8: Status 404 returned error can't find the container with id ceec9dd89055791ccc31eb79062b37f309a94b0d48771ee8fb5f3eb41c1d79c8 Jan 27 14:16:11 crc kubenswrapper[4833]: W0127 14:16:11.904120 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0fae6bea3bb6fd8b7c416371dcd88300fd810cf26f2d5727643ae988d9c59c8b WatchSource:0}: Error finding container 0fae6bea3bb6fd8b7c416371dcd88300fd810cf26f2d5727643ae988d9c59c8b: Status 404 returned error can't find the container with id 0fae6bea3bb6fd8b7c416371dcd88300fd810cf26f2d5727643ae988d9c59c8b Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.741901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"573f06952d42a7b73d30b00f3cf79ff855a81d2d31a9178bc89cb0c2478f3980"} Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.742277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0fae6bea3bb6fd8b7c416371dcd88300fd810cf26f2d5727643ae988d9c59c8b"} Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.742511 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.743634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"53ecb2faa62506e5cda041f8dee171620ac9115fc65c44f800150670dab6ee33"} Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.743672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ceec9dd89055791ccc31eb79062b37f309a94b0d48771ee8fb5f3eb41c1d79c8"} Jan 27 14:16:12 crc 
kubenswrapper[4833]: I0127 14:16:12.745836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"35dccd43be099cd22771d887b44a8f25fc885db99134a6c38bcaadfeb71ea419"} Jan 27 14:16:12 crc kubenswrapper[4833]: I0127 14:16:12.745875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"289a9437769ef3759576bf5d7dfcd81fb6617b7c9e19a34bf61e48c91d4bb81a"} Jan 27 14:16:13 crc kubenswrapper[4833]: I0127 14:16:13.755557 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 27 14:16:13 crc kubenswrapper[4833]: I0127 14:16:13.755628 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="53ecb2faa62506e5cda041f8dee171620ac9115fc65c44f800150670dab6ee33" exitCode=255 Jan 27 14:16:13 crc kubenswrapper[4833]: I0127 14:16:13.755732 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"53ecb2faa62506e5cda041f8dee171620ac9115fc65c44f800150670dab6ee33"} Jan 27 14:16:13 crc kubenswrapper[4833]: I0127 14:16:13.756164 4833 scope.go:117] "RemoveContainer" containerID="53ecb2faa62506e5cda041f8dee171620ac9115fc65c44f800150670dab6ee33" Jan 27 14:16:14 crc kubenswrapper[4833]: I0127 14:16:14.392936 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:14 crc kubenswrapper[4833]: I0127 14:16:14.764118 4833 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 27 14:16:14 crc kubenswrapper[4833]: I0127 14:16:14.764199 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"64771393e6939fc733600dd2eba59f26938d5e0e3493a146a497b39bb00f2a52"} Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.059110 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.062733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.772813 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.773741 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.773852 4833 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="64771393e6939fc733600dd2eba59f26938d5e0e3493a146a497b39bb00f2a52" exitCode=255 Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.773940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"64771393e6939fc733600dd2eba59f26938d5e0e3493a146a497b39bb00f2a52"} Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.774015 4833 scope.go:117] "RemoveContainer" containerID="53ecb2faa62506e5cda041f8dee171620ac9115fc65c44f800150670dab6ee33" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.775286 4833 scope.go:117] "RemoveContainer" containerID="64771393e6939fc733600dd2eba59f26938d5e0e3493a146a497b39bb00f2a52" Jan 27 14:16:15 crc kubenswrapper[4833]: E0127 14:16:15.775768 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:16:15 crc kubenswrapper[4833]: I0127 14:16:15.781022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:16:16 crc kubenswrapper[4833]: I0127 14:16:16.784036 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 14:16:19 crc kubenswrapper[4833]: I0127 14:16:19.245654 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6ae80825-c7a6-401d-b467-b311ed863cb3" Jan 27 14:16:21 crc kubenswrapper[4833]: I0127 14:16:21.549732 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:16:21 crc kubenswrapper[4833]: I0127 14:16:21.567376 4833 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 14:16:22 crc kubenswrapper[4833]: I0127 14:16:22.250342 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:16:22 crc kubenswrapper[4833]: I0127 14:16:22.842036 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:16:23 crc kubenswrapper[4833]: I0127 14:16:23.032728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:16:23 crc kubenswrapper[4833]: I0127 14:16:23.136657 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:16:23 crc kubenswrapper[4833]: I0127 14:16:23.255420 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:16:23 crc kubenswrapper[4833]: I0127 14:16:23.515684 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:16:23 crc kubenswrapper[4833]: I0127 14:16:23.590499 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.010717 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.054532 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.136873 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:16:24 crc 
kubenswrapper[4833]: I0127 14:16:24.232301 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.370429 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.397316 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.416300 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.479069 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.746698 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 14:16:24 crc kubenswrapper[4833]: I0127 14:16:24.844878 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.309418 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.347067 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.526625 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.659537 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.683307 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.849063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.862761 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.898416 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.924876 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.933162 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:16:25 crc kubenswrapper[4833]: I0127 14:16:25.965675 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.029077 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.110790 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.185494 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.206265 4833 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.382787 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.417473 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.432811 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.526837 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.558799 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.682993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.743568 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.840690 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 14:16:26 crc kubenswrapper[4833]: I0127 14:16:26.875628 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.110610 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.181888 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.199784 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.215810 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.228637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.367391 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.367512 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.382564 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.439288 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.500966 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.509910 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.518847 4833 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.566268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.573647 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.573682 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.614344 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.686151 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.734939 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.764398 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.779754 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 14:16:27 crc kubenswrapper[4833]: I0127 14:16:27.869133 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.166591 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.172541 4833 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.221815 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.247753 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.316249 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.410587 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.455336 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.455387 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.506853 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.572933 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.574931 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.597140 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.597175 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.661760 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.699233 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.748051 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.771164 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 14:16:28 crc kubenswrapper[4833]: I0127 14:16:28.994693 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.064992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.205022 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.216955 4833 scope.go:117] "RemoveContainer" containerID="64771393e6939fc733600dd2eba59f26938d5e0e3493a146a497b39bb00f2a52" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.290276 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 14:16:29 crc 
kubenswrapper[4833]: I0127 14:16:29.292247 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.295237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.438169 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.533960 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.560516 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.581435 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.615405 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.621284 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.675811 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.722268 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.727501 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.778757 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.797786 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.806037 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.810173 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.842845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.843954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.865024 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.865082 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e4ba51816415884701e69b17f3e02590a1dbd4cb957d900346c3e8a4a9cd4548"} Jan 27 14:16:29 crc kubenswrapper[4833]: I0127 14:16:29.967417 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.012222 
4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.033767 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.034110 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.034653 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.063664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.092887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.126207 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.155288 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.158832 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.199739 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.373890 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 
14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.427216 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.480614 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.519765 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.557659 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.559499 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.575931 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.682224 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.746836 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.793485 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.832409 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:16:30 crc kubenswrapper[4833]: I0127 14:16:30.999230 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.007709 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.036557 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.143954 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.174750 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.191374 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.231611 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.269141 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.291674 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.579630 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.721570 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.740439 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.799839 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.806177 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.859253 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:16:31 crc kubenswrapper[4833]: I0127 14:16:31.968936 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.039799 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.070662 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.073394 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.100282 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.107542 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.130927 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.168971 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.185341 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.239135 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.389846 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.394678 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.492826 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.658548 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.658681 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.674382 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.696604 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 
14:16:32.718597 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.792430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.843618 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.887470 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.930464 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 14:16:32 crc kubenswrapper[4833]: I0127 14:16:32.932607 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.076636 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.147256 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.163550 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.262897 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.264952 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.264934519 
podStartE2EDuration="42.264934519s" podCreationTimestamp="2026-01-27 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:10.459624392 +0000 UTC m=+272.110948794" watchObservedRunningTime="2026-01-27 14:16:33.264934519 +0000 UTC m=+294.916258921" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.266392 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.267571 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-pbxl9"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.267646 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-58695898bb-frfpn"] Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.267835 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" containerName="installer" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.267854 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" containerName="installer" Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.267865 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810332c8-987e-485f-9940-d1b61944b1a8" containerName="oauth-openshift" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.267872 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="810332c8-987e-485f-9940-d1b61944b1a8" containerName="oauth-openshift" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268100 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268123 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e7ae050f-04d0-4da1-b503-82308f3481aa" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268138 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="810332c8-987e-485f-9940-d1b61944b1a8" containerName="oauth-openshift" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268153 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede7a1c3-853a-4add-9821-8229c9de4d04" containerName="installer" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268493 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst","openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268638 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268694 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" podUID="b3a51624-cf21-46ec-9894-8b91af168053" containerName="controller-manager" containerID="cri-o://a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36" gracePeriod=30 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.268782 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" podUID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" containerName="route-controller-manager" containerID="cri-o://5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6" gracePeriod=30 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273153 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273478 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273588 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273688 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273781 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273869 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.273973 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.274063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.276080 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.276269 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.276940 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.277686 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.282782 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.296637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.298406 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.309712 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.311016 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.315186 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.315166518 podStartE2EDuration="23.315166518s" podCreationTimestamp="2026-01-27 14:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:33.312751507 +0000 UTC m=+294.964075909" watchObservedRunningTime="2026-01-27 14:16:33.315166518 +0000 UTC m=+294.966490920" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-error\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-dir\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-session\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399515 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399551 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-router-certs\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: 
\"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399604 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-login\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399624 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-policies\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399655 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: 
I0127 14:16:33.399677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-service-ca\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399718 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.399746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsms4\" (UniqueName: \"kubernetes.io/projected/6296c3e2-65a9-4375-bb3f-a288931e9a96-kube-api-access-gsms4\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.401851 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.457720 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-login\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-policies\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501602 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-service-ca\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsms4\" (UniqueName: \"kubernetes.io/projected/6296c3e2-65a9-4375-bb3f-a288931e9a96-kube-api-access-gsms4\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.501979 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.502739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-policies\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: 
\"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.505233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.505316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-error\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.505940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-dir\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.505971 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-session\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.505995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.506046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.506578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6296c3e2-65a9-4375-bb3f-a288931e9a96-audit-dir\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.507062 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-service-ca\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.508414 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " 
pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.508730 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-login\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.508774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-router-certs\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.510024 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.510792 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-error\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.511359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.511863 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.513918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-system-session\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.514877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6296c3e2-65a9-4375-bb3f-a288931e9a96-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.526802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsms4\" (UniqueName: \"kubernetes.io/projected/6296c3e2-65a9-4375-bb3f-a288931e9a96-kube-api-access-gsms4\") pod \"oauth-openshift-58695898bb-frfpn\" (UID: \"6296c3e2-65a9-4375-bb3f-a288931e9a96\") " 
pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.553132 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.597646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.629208 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.632846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.682735 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.704296 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.724364 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.724698 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" containerName="route-controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.724711 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" containerName="route-controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.724721 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a51624-cf21-46ec-9894-8b91af168053" containerName="controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.724727 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a51624-cf21-46ec-9894-8b91af168053" containerName="controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.724842 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" containerName="route-controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.724856 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a51624-cf21-46ec-9894-8b91af168053" containerName="controller-manager" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.725245 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.730869 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.771377 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.792813 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809622 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq5bh\" (UniqueName: \"kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh\") pod \"b3a51624-cf21-46ec-9894-8b91af168053\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809711 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert\") pod \"b3a51624-cf21-46ec-9894-8b91af168053\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809736 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles\") pod \"b3a51624-cf21-46ec-9894-8b91af168053\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809753 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config\") 
pod \"b3a51624-cf21-46ec-9894-8b91af168053\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809776 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca\") pod \"b3a51624-cf21-46ec-9894-8b91af168053\" (UID: \"b3a51624-cf21-46ec-9894-8b91af168053\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert\") pod \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca\") pod \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnvg\" (UniqueName: \"kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg\") pod \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.809924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config\") pod \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\" (UID: \"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c\") " Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.811118 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" (UID: "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.811168 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config" (OuterVolumeSpecName: "config") pod "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" (UID: "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.811227 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca" (OuterVolumeSpecName: "client-ca") pod "b3a51624-cf21-46ec-9894-8b91af168053" (UID: "b3a51624-cf21-46ec-9894-8b91af168053"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.811278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b3a51624-cf21-46ec-9894-8b91af168053" (UID: "b3a51624-cf21-46ec-9894-8b91af168053"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.812972 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config" (OuterVolumeSpecName: "config") pod "b3a51624-cf21-46ec-9894-8b91af168053" (UID: "b3a51624-cf21-46ec-9894-8b91af168053"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.814646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" (UID: "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.814750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg" (OuterVolumeSpecName: "kube-api-access-fnnvg") pod "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" (UID: "7a901c93-41b5-4767-92b0-e5ac4f9b4a2c"). InnerVolumeSpecName "kube-api-access-fnnvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.814826 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b3a51624-cf21-46ec-9894-8b91af168053" (UID: "b3a51624-cf21-46ec-9894-8b91af168053"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.815549 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh" (OuterVolumeSpecName: "kube-api-access-kq5bh") pod "b3a51624-cf21-46ec-9894-8b91af168053" (UID: "b3a51624-cf21-46ec-9894-8b91af168053"). InnerVolumeSpecName "kube-api-access-kq5bh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.872392 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.889128 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.892015 4833 generic.go:334] "Generic (PLEG): container finished" podID="b3a51624-cf21-46ec-9894-8b91af168053" containerID="a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36" exitCode=0 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.892073 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" event={"ID":"b3a51624-cf21-46ec-9894-8b91af168053","Type":"ContainerDied","Data":"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36"} Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.892101 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" event={"ID":"b3a51624-cf21-46ec-9894-8b91af168053","Type":"ContainerDied","Data":"61861eaf67def11002ad749143e6af83ce13ccee6690eae9c866997a56d66397"} Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.892098 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6dd8668d58-wsqb8" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.892134 4833 scope.go:117] "RemoveContainer" containerID="a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.893642 4833 generic.go:334] "Generic (PLEG): container finished" podID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" containerID="5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6" exitCode=0 Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.893687 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" event={"ID":"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c","Type":"ContainerDied","Data":"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6"} Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.893698 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.893719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst" event={"ID":"7a901c93-41b5-4767-92b0-e5ac4f9b4a2c","Type":"ContainerDied","Data":"bbad0e344021534927e1d5d2fd9c31d82e04f3d68dae0c30229c0a736f4054c4"} Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.908745 4833 scope.go:117] "RemoveContainer" containerID="a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36" Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.909174 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36\": container with ID starting with a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36 not 
found: ID does not exist" containerID="a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910086 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36"} err="failed to get container status \"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36\": rpc error: code = NotFound desc = could not find container \"a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36\": container with ID starting with a28c5467a83301501d624fd37c22d735851f8a6ce6ea93154882d13709f3ee36 not found: ID does not exist" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910142 4833 scope.go:117] "RemoveContainer" containerID="5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rg9j\" (UniqueName: \"kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.910997 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq5bh\" (UniqueName: \"kubernetes.io/projected/b3a51624-cf21-46ec-9894-8b91af168053-kube-api-access-kq5bh\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911007 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3a51624-cf21-46ec-9894-8b91af168053-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911016 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 
14:16:33.911026 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911035 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3a51624-cf21-46ec-9894-8b91af168053-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911043 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911052 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnnvg\" (UniqueName: \"kubernetes.io/projected/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-kube-api-access-fnnvg\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911060 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.911070 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.914674 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.921538 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.925124 4833 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57576445d7-mrsst"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.933094 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.935505 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dd8668d58-wsqb8"] Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.935898 4833 scope.go:117] "RemoveContainer" containerID="5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6" Jan 27 14:16:33 crc kubenswrapper[4833]: E0127 14:16:33.936656 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6\": container with ID starting with 5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6 not found: ID does not exist" containerID="5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.936689 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6"} err="failed to get container status \"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6\": rpc error: code = NotFound desc = could not find container \"5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6\": container with ID starting with 5ccdad095fe75d9cd7cf57ed7667233ff1ca2a31184788745c91f2d464f645b6 not found: ID does not exist" Jan 27 14:16:33 crc kubenswrapper[4833]: I0127 14:16:33.989548 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.008257 4833 reflector.go:368] Caches populated 
for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.011864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.011914 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.011943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.012002 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rg9j\" (UniqueName: \"kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.012025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert\") pod 
\"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.012966 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.013506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.013892 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.014007 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58695898bb-frfpn"] Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.015591 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc 
kubenswrapper[4833]: W0127 14:16:34.016873 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6296c3e2_65a9_4375_bb3f_a288931e9a96.slice/crio-bc01c0bf03239536b64afc16426d3f3abb778b5a20a0b228fb1b28cab7d07059 WatchSource:0}: Error finding container bc01c0bf03239536b64afc16426d3f3abb778b5a20a0b228fb1b28cab7d07059: Status 404 returned error can't find the container with id bc01c0bf03239536b64afc16426d3f3abb778b5a20a0b228fb1b28cab7d07059 Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.027778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rg9j\" (UniqueName: \"kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j\") pod \"controller-manager-c49c4cc7b-bxh9j\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.043115 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.103914 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.219738 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.385304 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.412583 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.440504 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.447981 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.458199 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.478746 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.564326 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.580167 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.582327 
4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.656153 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.697985 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.827963 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.837380 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.842941 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.903690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" event={"ID":"6296c3e2-65a9-4375-bb3f-a288931e9a96","Type":"ContainerStarted","Data":"d7f7f5f0b0481572fa7fd0b754c0bc91d044174b9a2ac89acf51732377d766cd"} Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.903767 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" event={"ID":"6296c3e2-65a9-4375-bb3f-a288931e9a96","Type":"ContainerStarted","Data":"bc01c0bf03239536b64afc16426d3f3abb778b5a20a0b228fb1b28cab7d07059"} Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.903963 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.906253 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" event={"ID":"527c426f-5990-40e5-a67b-8bdbd8779034","Type":"ContainerStarted","Data":"6f2ce6a8a5e54c5f849e25139455ab800097d7a282caab5d7d54d5fa76c19f4b"} Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.906295 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" event={"ID":"527c426f-5990-40e5-a67b-8bdbd8779034","Type":"ContainerStarted","Data":"9059d19c435c6667cff37d1b9a7043224a4ea4f9a649326aba9557b51ba37e1e"} Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.906501 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.909290 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.910998 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.925840 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58695898bb-frfpn" podStartSLOduration=57.925821376 podStartE2EDuration="57.925821376s" podCreationTimestamp="2026-01-27 14:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:34.925537819 +0000 UTC m=+296.576862231" watchObservedRunningTime="2026-01-27 14:16:34.925821376 +0000 UTC m=+296.577145768" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.946558 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" podStartSLOduration=8.946539959 podStartE2EDuration="8.946539959s" podCreationTimestamp="2026-01-27 14:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:34.944798366 +0000 UTC m=+296.596122758" watchObservedRunningTime="2026-01-27 14:16:34.946539959 +0000 UTC m=+296.597864361" Jan 27 14:16:34 crc kubenswrapper[4833]: I0127 14:16:34.998221 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.057661 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.148367 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.169034 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.218383 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a901c93-41b5-4767-92b0-e5ac4f9b4a2c" path="/var/lib/kubelet/pods/7a901c93-41b5-4767-92b0-e5ac4f9b4a2c/volumes" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.219235 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810332c8-987e-485f-9940-d1b61944b1a8" path="/var/lib/kubelet/pods/810332c8-987e-485f-9940-d1b61944b1a8/volumes" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.219954 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a51624-cf21-46ec-9894-8b91af168053" path="/var/lib/kubelet/pods/b3a51624-cf21-46ec-9894-8b91af168053/volumes" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.330557 4833 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.438873 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.448136 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.534537 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.651103 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.654942 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.784027 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.796740 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.821187 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.873621 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.910505 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 14:16:35 crc kubenswrapper[4833]: I0127 14:16:35.919698 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.014478 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.078828 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.193478 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.437160 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.458916 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.585594 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.586611 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.588605 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.588771 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.589015 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.589123 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.589218 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.593237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.594518 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.601084 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.616246 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.746584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.746627 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.746654 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.746677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtcwq\" (UniqueName: \"kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.817165 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.848184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.848257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.848295 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.848328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtcwq\" (UniqueName: \"kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.849811 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " 
pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.849936 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.856789 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.864139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtcwq\" (UniqueName: \"kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq\") pod \"route-controller-manager-697ddf4585-nhm4n\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:36 crc kubenswrapper[4833]: I0127 14:16:36.924215 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.130440 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.285165 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.320208 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:37 crc kubenswrapper[4833]: W0127 14:16:37.326286 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb06c346_e8ed_4868_8ab7_c3a3b7b41565.slice/crio-02b64bab354630ba459a9069dd01983931db4f5464b55b1aaba67d995bc5f7c3 WatchSource:0}: Error finding container 02b64bab354630ba459a9069dd01983931db4f5464b55b1aaba67d995bc5f7c3: Status 404 returned error can't find the container with id 02b64bab354630ba459a9069dd01983931db4f5464b55b1aaba67d995bc5f7c3 Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.396987 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.572848 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.595043 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.765900 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 
14:16:37.931066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" event={"ID":"eb06c346-e8ed-4868-8ab7-c3a3b7b41565","Type":"ContainerStarted","Data":"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc"} Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.931132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" event={"ID":"eb06c346-e8ed-4868-8ab7-c3a3b7b41565","Type":"ContainerStarted","Data":"02b64bab354630ba459a9069dd01983931db4f5464b55b1aaba67d995bc5f7c3"} Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.931300 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.942975 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:37 crc kubenswrapper[4833]: I0127 14:16:37.954076 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" podStartSLOduration=11.954059023 podStartE2EDuration="11.954059023s" podCreationTimestamp="2026-01-27 14:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:37.951908228 +0000 UTC m=+299.603232640" watchObservedRunningTime="2026-01-27 14:16:37.954059023 +0000 UTC m=+299.605383425" Jan 27 14:16:38 crc kubenswrapper[4833]: I0127 14:16:38.117665 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:16:38 crc kubenswrapper[4833]: I0127 14:16:38.193798 4833 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 14:16:38 crc kubenswrapper[4833]: I0127 14:16:38.276333 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 14:16:38 crc kubenswrapper[4833]: I0127 14:16:38.465312 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 14:16:38 crc kubenswrapper[4833]: I0127 14:16:38.983510 4833 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 14:16:39 crc kubenswrapper[4833]: I0127 14:16:39.326354 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:16:39 crc kubenswrapper[4833]: I0127 14:16:39.641747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:16:40 crc kubenswrapper[4833]: I0127 14:16:40.057236 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 14:16:40 crc kubenswrapper[4833]: I0127 14:16:40.995883 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.332579 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.333151 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" podUID="527c426f-5990-40e5-a67b-8bdbd8779034" containerName="controller-manager" 
containerID="cri-o://6f2ce6a8a5e54c5f849e25139455ab800097d7a282caab5d7d54d5fa76c19f4b" gracePeriod=30 Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.361270 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.361729 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" podUID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" containerName="route-controller-manager" containerID="cri-o://e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc" gracePeriod=30 Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.493435 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.493706 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://67ce8cf455c81a74eb4db83800d9474a9ba7c01ef0092ca876001734a0a046ae" gracePeriod=5 Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.898629 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.976054 4833 generic.go:334] "Generic (PLEG): container finished" podID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" containerID="e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc" exitCode=0 Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.976199 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" event={"ID":"eb06c346-e8ed-4868-8ab7-c3a3b7b41565","Type":"ContainerDied","Data":"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc"} Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.976917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" event={"ID":"eb06c346-e8ed-4868-8ab7-c3a3b7b41565","Type":"ContainerDied","Data":"02b64bab354630ba459a9069dd01983931db4f5464b55b1aaba67d995bc5f7c3"} Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.976948 4833 scope.go:117] "RemoveContainer" containerID="e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc" Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.976232 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n" Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.979681 4833 generic.go:334] "Generic (PLEG): container finished" podID="527c426f-5990-40e5-a67b-8bdbd8779034" containerID="6f2ce6a8a5e54c5f849e25139455ab800097d7a282caab5d7d54d5fa76c19f4b" exitCode=0 Jan 27 14:16:44 crc kubenswrapper[4833]: I0127 14:16:44.979745 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" event={"ID":"527c426f-5990-40e5-a67b-8bdbd8779034","Type":"ContainerDied","Data":"6f2ce6a8a5e54c5f849e25139455ab800097d7a282caab5d7d54d5fa76c19f4b"} Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.002234 4833 scope.go:117] "RemoveContainer" containerID="e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc" Jan 27 14:16:45 crc kubenswrapper[4833]: E0127 14:16:45.002907 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc\": container with ID starting with e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc not found: ID does not exist" containerID="e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.005415 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc"} err="failed to get container status \"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc\": rpc error: code = NotFound desc = could not find container \"e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc\": container with ID starting with e31bf2f960215065858fddab76c2cac57b4a239afb1c06c83cfcc94fc39519dc not found: ID does not exist" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 
14:16:45.039796 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.070381 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert\") pod \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.070671 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtcwq\" (UniqueName: \"kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq\") pod \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.070782 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca\") pod \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.070861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config\") pod \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\" (UID: \"eb06c346-e8ed-4868-8ab7-c3a3b7b41565\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.071282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb06c346-e8ed-4868-8ab7-c3a3b7b41565" (UID: "eb06c346-e8ed-4868-8ab7-c3a3b7b41565"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.071775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config" (OuterVolumeSpecName: "config") pod "eb06c346-e8ed-4868-8ab7-c3a3b7b41565" (UID: "eb06c346-e8ed-4868-8ab7-c3a3b7b41565"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.076852 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb06c346-e8ed-4868-8ab7-c3a3b7b41565" (UID: "eb06c346-e8ed-4868-8ab7-c3a3b7b41565"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.077306 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq" (OuterVolumeSpecName: "kube-api-access-gtcwq") pod "eb06c346-e8ed-4868-8ab7-c3a3b7b41565" (UID: "eb06c346-e8ed-4868-8ab7-c3a3b7b41565"). InnerVolumeSpecName "kube-api-access-gtcwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.172565 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles\") pod \"527c426f-5990-40e5-a67b-8bdbd8779034\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.172624 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config\") pod \"527c426f-5990-40e5-a67b-8bdbd8779034\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.172687 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rg9j\" (UniqueName: \"kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j\") pod \"527c426f-5990-40e5-a67b-8bdbd8779034\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.173567 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config" (OuterVolumeSpecName: "config") pod "527c426f-5990-40e5-a67b-8bdbd8779034" (UID: "527c426f-5990-40e5-a67b-8bdbd8779034"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.173640 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "527c426f-5990-40e5-a67b-8bdbd8779034" (UID: "527c426f-5990-40e5-a67b-8bdbd8779034"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.173974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert\") pod \"527c426f-5990-40e5-a67b-8bdbd8779034\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174142 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca\") pod \"527c426f-5990-40e5-a67b-8bdbd8779034\" (UID: \"527c426f-5990-40e5-a67b-8bdbd8779034\") " Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174507 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca" (OuterVolumeSpecName: "client-ca") pod "527c426f-5990-40e5-a67b-8bdbd8779034" (UID: "527c426f-5990-40e5-a67b-8bdbd8779034"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174568 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174667 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtcwq\" (UniqueName: \"kubernetes.io/projected/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-kube-api-access-gtcwq\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174735 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174798 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb06c346-e8ed-4868-8ab7-c3a3b7b41565-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174863 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.174936 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.175887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j" (OuterVolumeSpecName: "kube-api-access-8rg9j") pod "527c426f-5990-40e5-a67b-8bdbd8779034" (UID: "527c426f-5990-40e5-a67b-8bdbd8779034"). 
InnerVolumeSpecName "kube-api-access-8rg9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.176345 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "527c426f-5990-40e5-a67b-8bdbd8779034" (UID: "527c426f-5990-40e5-a67b-8bdbd8779034"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.276277 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527c426f-5990-40e5-a67b-8bdbd8779034-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.276330 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rg9j\" (UniqueName: \"kubernetes.io/projected/527c426f-5990-40e5-a67b-8bdbd8779034-kube-api-access-8rg9j\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.276352 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527c426f-5990-40e5-a67b-8bdbd8779034-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.302715 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.307073 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-nhm4n"] Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.586486 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:16:45 crc kubenswrapper[4833]: E0127 14:16:45.586819 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.586835 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:16:45 crc kubenswrapper[4833]: E0127 14:16:45.586845 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527c426f-5990-40e5-a67b-8bdbd8779034" containerName="controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.586854 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="527c426f-5990-40e5-a67b-8bdbd8779034" containerName="controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: E0127 14:16:45.586874 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" containerName="route-controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.586881 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" containerName="route-controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.586997 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" containerName="route-controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.587011 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="527c426f-5990-40e5-a67b-8bdbd8779034" containerName="controller-manager" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.587028 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.587508 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.600798 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.601618 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.605025 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.605044 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.605336 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.605375 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.605787 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.606487 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.616615 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.616959 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.686130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.686185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.686216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.686252 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hl4\" (UniqueName: \"kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.686282 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.788914 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hl4\" (UniqueName: \"kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 
14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789739 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtrr\" (UniqueName: \"kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.789828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.792604 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.792691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.794584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.796273 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.822859 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f9hl4\" (UniqueName: \"kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4\") pod \"controller-manager-5ff6ccf94b-jx97h\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.891045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.891127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.891147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.891186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtrr\" (UniqueName: \"kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " 
pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.892684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.892728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.894736 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.909506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtrr\" (UniqueName: \"kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr\") pod \"route-controller-manager-dfff9545c-frbtk\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.927066 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.935583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.988628 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.988625 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j" event={"ID":"527c426f-5990-40e5-a67b-8bdbd8779034","Type":"ContainerDied","Data":"9059d19c435c6667cff37d1b9a7043224a4ea4f9a649326aba9557b51ba37e1e"} Jan 27 14:16:45 crc kubenswrapper[4833]: I0127 14:16:45.988752 4833 scope.go:117] "RemoveContainer" containerID="6f2ce6a8a5e54c5f849e25139455ab800097d7a282caab5d7d54d5fa76c19f4b" Jan 27 14:16:46 crc kubenswrapper[4833]: I0127 14:16:46.014037 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:46 crc kubenswrapper[4833]: I0127 14:16:46.023535 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-bxh9j"] Jan 27 14:16:46 crc kubenswrapper[4833]: I0127 14:16:46.411363 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:16:46 crc kubenswrapper[4833]: W0127 14:16:46.419236 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82c66b6_4e31_4b91_b053_a258e5ef31cd.slice/crio-be47a2fe5eb61fb2ea0bb02112014b9ed41350a6ba46d87f8b13ad4bb4d75892 WatchSource:0}: Error finding container 
be47a2fe5eb61fb2ea0bb02112014b9ed41350a6ba46d87f8b13ad4bb4d75892: Status 404 returned error can't find the container with id be47a2fe5eb61fb2ea0bb02112014b9ed41350a6ba46d87f8b13ad4bb4d75892 Jan 27 14:16:46 crc kubenswrapper[4833]: I0127 14:16:46.453052 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.001294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" event={"ID":"c82c66b6-4e31-4b91-b053-a258e5ef31cd","Type":"ContainerStarted","Data":"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa"} Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.002046 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.002068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" event={"ID":"c82c66b6-4e31-4b91-b053-a258e5ef31cd","Type":"ContainerStarted","Data":"be47a2fe5eb61fb2ea0bb02112014b9ed41350a6ba46d87f8b13ad4bb4d75892"} Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.002907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" event={"ID":"a0710d1c-1b4e-4263-9100-d4bd01566d55","Type":"ContainerStarted","Data":"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992"} Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.002935 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" event={"ID":"a0710d1c-1b4e-4263-9100-d4bd01566d55","Type":"ContainerStarted","Data":"136e436ae1cad2f7f5a67fb6bcca52f83bfbb43a0110e2947e5e1b8e0da0e641"} Jan 27 14:16:47 crc 
kubenswrapper[4833]: I0127 14:16:47.003130 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.008105 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.011636 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.036933 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" podStartSLOduration=3.036911063 podStartE2EDuration="3.036911063s" podCreationTimestamp="2026-01-27 14:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:47.034713107 +0000 UTC m=+308.686037509" watchObservedRunningTime="2026-01-27 14:16:47.036911063 +0000 UTC m=+308.688235475" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.055372 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" podStartSLOduration=3.055357629 podStartE2EDuration="3.055357629s" podCreationTimestamp="2026-01-27 14:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:16:47.052396714 +0000 UTC m=+308.703721116" watchObservedRunningTime="2026-01-27 14:16:47.055357629 +0000 UTC m=+308.706682031" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.237545 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527c426f-5990-40e5-a67b-8bdbd8779034" 
path="/var/lib/kubelet/pods/527c426f-5990-40e5-a67b-8bdbd8779034/volumes" Jan 27 14:16:47 crc kubenswrapper[4833]: I0127 14:16:47.238162 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb06c346-e8ed-4868-8ab7-c3a3b7b41565" path="/var/lib/kubelet/pods/eb06c346-e8ed-4868-8ab7-c3a3b7b41565/volumes" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.022233 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.022544 4833 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="67ce8cf455c81a74eb4db83800d9474a9ba7c01ef0092ca876001734a0a046ae" exitCode=137 Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.064353 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.064420 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.260803 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.260879 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.260889 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.260931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.260953 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261010 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261036 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261275 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261293 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261303 4833 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.261313 4833 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.268320 4833 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:16:50 crc kubenswrapper[4833]: I0127 14:16:50.363103 4833 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.029689 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.029766 4833 scope.go:117] "RemoveContainer" containerID="67ce8cf455c81a74eb4db83800d9474a9ba7c01ef0092ca876001734a0a046ae" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.029838 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.223033 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.223685 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.235160 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.235209 4833 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="63171682-9d76-493f-af51-44c7b532975b" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.240063 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.240265 4833 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="63171682-9d76-493f-af51-44c7b532975b" Jan 27 14:16:51 crc kubenswrapper[4833]: I0127 14:16:51.374252 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.923542 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7tkrx"] Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.926258 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.936890 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7tkrx"] Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983032 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-certificates\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-tls\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983214 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-bound-sa-token\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983267 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-trusted-ca\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:23 crc kubenswrapper[4833]: I0127 14:17:23.983310 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275l5\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-kube-api-access-275l5\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.007542 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-tls\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-bound-sa-token\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084660 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-trusted-ca\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084682 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275l5\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-kube-api-access-275l5\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.084711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-certificates\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.085109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.086214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-trusted-ca\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 
14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.086261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-certificates\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.092507 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-registry-tls\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.092535 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.101573 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-bound-sa-token\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.106070 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275l5\" (UniqueName: \"kubernetes.io/projected/8ce45d4d-66e4-48ac-a8b6-d67f55ea4263-kube-api-access-275l5\") pod \"image-registry-66df7c8f76-7tkrx\" (UID: \"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.250667 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.337986 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.338398 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" podUID="a0710d1c-1b4e-4263-9100-d4bd01566d55" containerName="controller-manager" containerID="cri-o://856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992" gracePeriod=30 Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.375944 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.376129 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" podUID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" containerName="route-controller-manager" containerID="cri-o://833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa" gracePeriod=30 Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.747302 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7tkrx"] Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.780235 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.805646 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert\") pod \"a0710d1c-1b4e-4263-9100-d4bd01566d55\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.805715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles\") pod \"a0710d1c-1b4e-4263-9100-d4bd01566d55\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.805767 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9hl4\" (UniqueName: \"kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4\") pod \"a0710d1c-1b4e-4263-9100-d4bd01566d55\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.805799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config\") pod \"a0710d1c-1b4e-4263-9100-d4bd01566d55\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.805864 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca\") pod \"a0710d1c-1b4e-4263-9100-d4bd01566d55\" (UID: \"a0710d1c-1b4e-4263-9100-d4bd01566d55\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.806625 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0710d1c-1b4e-4263-9100-d4bd01566d55" (UID: "a0710d1c-1b4e-4263-9100-d4bd01566d55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.806736 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config" (OuterVolumeSpecName: "config") pod "a0710d1c-1b4e-4263-9100-d4bd01566d55" (UID: "a0710d1c-1b4e-4263-9100-d4bd01566d55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.810837 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.810893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0710d1c-1b4e-4263-9100-d4bd01566d55" (UID: "a0710d1c-1b4e-4263-9100-d4bd01566d55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.811962 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4" (OuterVolumeSpecName: "kube-api-access-f9hl4") pod "a0710d1c-1b4e-4263-9100-d4bd01566d55" (UID: "a0710d1c-1b4e-4263-9100-d4bd01566d55"). InnerVolumeSpecName "kube-api-access-f9hl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.812429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0710d1c-1b4e-4263-9100-d4bd01566d55" (UID: "a0710d1c-1b4e-4263-9100-d4bd01566d55"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.906865 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert\") pod \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.906916 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dtrr\" (UniqueName: \"kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr\") pod \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907037 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca\") pod \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907061 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config\") pod \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\" (UID: \"c82c66b6-4e31-4b91-b053-a258e5ef31cd\") " Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907297 4833 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0710d1c-1b4e-4263-9100-d4bd01566d55-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907314 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907327 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9hl4\" (UniqueName: \"kubernetes.io/projected/a0710d1c-1b4e-4263-9100-d4bd01566d55-kube-api-access-f9hl4\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907337 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.907347 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0710d1c-1b4e-4263-9100-d4bd01566d55-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.908375 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "c82c66b6-4e31-4b91-b053-a258e5ef31cd" (UID: "c82c66b6-4e31-4b91-b053-a258e5ef31cd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.908603 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config" (OuterVolumeSpecName: "config") pod "c82c66b6-4e31-4b91-b053-a258e5ef31cd" (UID: "c82c66b6-4e31-4b91-b053-a258e5ef31cd"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.911226 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c82c66b6-4e31-4b91-b053-a258e5ef31cd" (UID: "c82c66b6-4e31-4b91-b053-a258e5ef31cd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:17:24 crc kubenswrapper[4833]: I0127 14:17:24.911233 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr" (OuterVolumeSpecName: "kube-api-access-2dtrr") pod "c82c66b6-4e31-4b91-b053-a258e5ef31cd" (UID: "c82c66b6-4e31-4b91-b053-a258e5ef31cd"). InnerVolumeSpecName "kube-api-access-2dtrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.008972 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.009257 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82c66b6-4e31-4b91-b053-a258e5ef31cd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.009266 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c82c66b6-4e31-4b91-b053-a258e5ef31cd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.009278 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dtrr\" (UniqueName: \"kubernetes.io/projected/c82c66b6-4e31-4b91-b053-a258e5ef31cd-kube-api-access-2dtrr\") 
on node \"crc\" DevicePath \"\"" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.243134 4833 generic.go:334] "Generic (PLEG): container finished" podID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" containerID="833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa" exitCode=0 Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.243169 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.243197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" event={"ID":"c82c66b6-4e31-4b91-b053-a258e5ef31cd","Type":"ContainerDied","Data":"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.243220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk" event={"ID":"c82c66b6-4e31-4b91-b053-a258e5ef31cd","Type":"ContainerDied","Data":"be47a2fe5eb61fb2ea0bb02112014b9ed41350a6ba46d87f8b13ad4bb4d75892"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.243236 4833 scope.go:117] "RemoveContainer" containerID="833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.244937 4833 generic.go:334] "Generic (PLEG): container finished" podID="a0710d1c-1b4e-4263-9100-d4bd01566d55" containerID="856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992" exitCode=0 Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.245034 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.245376 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" event={"ID":"a0710d1c-1b4e-4263-9100-d4bd01566d55","Type":"ContainerDied","Data":"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.245439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h" event={"ID":"a0710d1c-1b4e-4263-9100-d4bd01566d55","Type":"ContainerDied","Data":"136e436ae1cad2f7f5a67fb6bcca52f83bfbb43a0110e2947e5e1b8e0da0e641"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.248163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" event={"ID":"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263","Type":"ContainerStarted","Data":"d146d4fb0987c089d0c86616e93df47a43be9c3e78dc2c73bcff82ffa57a933f"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.248197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" event={"ID":"8ce45d4d-66e4-48ac-a8b6-d67f55ea4263","Type":"ContainerStarted","Data":"7a1bb3af173028a4d67f54e5b73bac8d6169b690769ecd3a713a445ff2bf592d"} Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.248326 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.264778 4833 scope.go:117] "RemoveContainer" containerID="833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa" Jan 27 14:17:25 crc kubenswrapper[4833]: E0127 14:17:25.265487 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa\": container with ID starting with 833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa not found: ID does not exist" containerID="833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.265553 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa"} err="failed to get container status \"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa\": rpc error: code = NotFound desc = could not find container \"833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa\": container with ID starting with 833e34603cbff782144ce4ee6b2515c22a1d101d127adac218f242104c1341aa not found: ID does not exist" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.265604 4833 scope.go:117] "RemoveContainer" containerID="856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.274010 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" podStartSLOduration=2.273984435 podStartE2EDuration="2.273984435s" podCreationTimestamp="2026-01-27 14:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:17:25.267903265 +0000 UTC m=+346.919227687" watchObservedRunningTime="2026-01-27 14:17:25.273984435 +0000 UTC m=+346.925308837" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.280723 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.286025 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-5ff6ccf94b-jx97h"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.288265 4833 scope.go:117] "RemoveContainer" containerID="856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992" Jan 27 14:17:25 crc kubenswrapper[4833]: E0127 14:17:25.288927 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992\": container with ID starting with 856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992 not found: ID does not exist" containerID="856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.288964 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992"} err="failed to get container status \"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992\": rpc error: code = NotFound desc = could not find container \"856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992\": container with ID starting with 856d5c59d9d80d8bd5acc948e98e496389cd7d35b2400061b70fff9e96cb1992 not found: ID does not exist" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.297865 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.302933 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dfff9545c-frbtk"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.611836 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-njqf7"] Jan 27 14:17:25 crc kubenswrapper[4833]: E0127 14:17:25.612095 4833 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" containerName="route-controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.612108 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" containerName="route-controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: E0127 14:17:25.612118 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0710d1c-1b4e-4263-9100-d4bd01566d55" containerName="controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.612124 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0710d1c-1b4e-4263-9100-d4bd01566d55" containerName="controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.612228 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" containerName="route-controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.612239 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0710d1c-1b4e-4263-9100-d4bd01566d55" containerName="controller-manager" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.612597 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.615246 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.616142 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.616267 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.616352 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.616655 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.617430 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.618088 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.618527 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.619241 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.619527 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.619572 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.620423 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.621060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.627745 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.628817 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-njqf7"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.630470 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:17:25 
crc kubenswrapper[4833]: I0127 14:17:25.649049 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz"] Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719081 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8874\" (UniqueName: \"kubernetes.io/projected/ebbf253a-7612-47bc-b5da-d27d2e62701c-kube-api-access-b8874\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5h5\" (UniqueName: \"kubernetes.io/projected/0e49b4d1-bb1a-4b42-b821-0992add73631-kube-api-access-hz5h5\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719170 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-client-ca\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbf253a-7612-47bc-b5da-d27d2e62701c-serving-cert\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc 
kubenswrapper[4833]: I0127 14:17:25.719311 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-config\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719506 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e49b4d1-bb1a-4b42-b821-0992add73631-serving-cert\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719529 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-client-ca\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.719635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-config\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.820998 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5h5\" (UniqueName: \"kubernetes.io/projected/0e49b4d1-bb1a-4b42-b821-0992add73631-kube-api-access-hz5h5\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-client-ca\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbf253a-7612-47bc-b5da-d27d2e62701c-serving-cert\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: 
I0127 14:17:25.821234 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-config\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e49b4d1-bb1a-4b42-b821-0992add73631-serving-cert\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-client-ca\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-config\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.821415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8874\" (UniqueName: \"kubernetes.io/projected/ebbf253a-7612-47bc-b5da-d27d2e62701c-kube-api-access-b8874\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: 
\"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.822950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-client-ca\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.823085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-proxy-ca-bundles\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.823485 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e49b4d1-bb1a-4b42-b821-0992add73631-config\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.823633 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-client-ca\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.826069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ebbf253a-7612-47bc-b5da-d27d2e62701c-config\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.826902 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbf253a-7612-47bc-b5da-d27d2e62701c-serving-cert\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.827479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e49b4d1-bb1a-4b42-b821-0992add73631-serving-cert\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.839049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5h5\" (UniqueName: \"kubernetes.io/projected/0e49b4d1-bb1a-4b42-b821-0992add73631-kube-api-access-hz5h5\") pod \"route-controller-manager-697ddf4585-gldhz\" (UID: \"0e49b4d1-bb1a-4b42-b821-0992add73631\") " pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.839683 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8874\" (UniqueName: \"kubernetes.io/projected/ebbf253a-7612-47bc-b5da-d27d2e62701c-kube-api-access-b8874\") pod \"controller-manager-c49c4cc7b-njqf7\" (UID: \"ebbf253a-7612-47bc-b5da-d27d2e62701c\") " pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 
14:17:25.943767 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:25 crc kubenswrapper[4833]: I0127 14:17:25.962416 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:26 crc kubenswrapper[4833]: I0127 14:17:26.343291 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c49c4cc7b-njqf7"] Jan 27 14:17:26 crc kubenswrapper[4833]: W0127 14:17:26.353019 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbf253a_7612_47bc_b5da_d27d2e62701c.slice/crio-fd19b6962f1e30b4977ef384402ba06b37edb2b5110bcb423956a3bb6af4d20d WatchSource:0}: Error finding container fd19b6962f1e30b4977ef384402ba06b37edb2b5110bcb423956a3bb6af4d20d: Status 404 returned error can't find the container with id fd19b6962f1e30b4977ef384402ba06b37edb2b5110bcb423956a3bb6af4d20d Jan 27 14:17:26 crc kubenswrapper[4833]: I0127 14:17:26.389753 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz"] Jan 27 14:17:26 crc kubenswrapper[4833]: W0127 14:17:26.394438 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e49b4d1_bb1a_4b42_b821_0992add73631.slice/crio-c0bc5a5d22c1fed62cbc34a7cf07840d0577d9b3915517c123f70e84d5c95bf1 WatchSource:0}: Error finding container c0bc5a5d22c1fed62cbc34a7cf07840d0577d9b3915517c123f70e84d5c95bf1: Status 404 returned error can't find the container with id c0bc5a5d22c1fed62cbc34a7cf07840d0577d9b3915517c123f70e84d5c95bf1 Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.217987 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a0710d1c-1b4e-4263-9100-d4bd01566d55" path="/var/lib/kubelet/pods/a0710d1c-1b4e-4263-9100-d4bd01566d55/volumes" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.218768 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82c66b6-4e31-4b91-b053-a258e5ef31cd" path="/var/lib/kubelet/pods/c82c66b6-4e31-4b91-b053-a258e5ef31cd/volumes" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.265259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" event={"ID":"0e49b4d1-bb1a-4b42-b821-0992add73631","Type":"ContainerStarted","Data":"64302f514f303cb2f39eaa41c86997fe5265b6a5f8079980dd0214b185ead3a7"} Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.265320 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" event={"ID":"0e49b4d1-bb1a-4b42-b821-0992add73631","Type":"ContainerStarted","Data":"c0bc5a5d22c1fed62cbc34a7cf07840d0577d9b3915517c123f70e84d5c95bf1"} Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.265620 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.267711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" event={"ID":"ebbf253a-7612-47bc-b5da-d27d2e62701c","Type":"ContainerStarted","Data":"32f976841e2242dca2cf3e9a7d9492c0421431b6e8d0649ff11d688a60406651"} Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.268120 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.268132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" event={"ID":"ebbf253a-7612-47bc-b5da-d27d2e62701c","Type":"ContainerStarted","Data":"fd19b6962f1e30b4977ef384402ba06b37edb2b5110bcb423956a3bb6af4d20d"} Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.271294 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.273882 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.284089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-697ddf4585-gldhz" podStartSLOduration=3.28407102 podStartE2EDuration="3.28407102s" podCreationTimestamp="2026-01-27 14:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:17:27.281573898 +0000 UTC m=+348.932898320" watchObservedRunningTime="2026-01-27 14:17:27.28407102 +0000 UTC m=+348.935395442" Jan 27 14:17:27 crc kubenswrapper[4833]: I0127 14:17:27.355107 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c49c4cc7b-njqf7" podStartSLOduration=3.355091032 podStartE2EDuration="3.355091032s" podCreationTimestamp="2026-01-27 14:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:17:27.350138718 +0000 UTC m=+349.001463120" watchObservedRunningTime="2026-01-27 14:17:27.355091032 +0000 UTC m=+349.006415434" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.293003 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.293889 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2md2f" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="registry-server" containerID="cri-o://7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06" gracePeriod=30 Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.296834 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.297108 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5pcs" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="registry-server" containerID="cri-o://dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b" gracePeriod=30 Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.311955 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.312202 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" containerID="cri-o://4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4" gracePeriod=30 Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.322292 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.323102 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9thj" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="registry-server" 
containerID="cri-o://f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954" gracePeriod=30 Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.331129 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.331381 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9g5pw" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="registry-server" containerID="cri-o://1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41" gracePeriod=30 Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.337659 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-spsms"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.339275 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.355237 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-spsms"] Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.483794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6slr\" (UniqueName: \"kubernetes.io/projected/3fda7467-b273-42f8-a470-89697e7b7a53-kube-api-access-g6slr\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.483912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-spsms\" 
(UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.483952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.585573 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.585663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6slr\" (UniqueName: \"kubernetes.io/projected/3fda7467-b273-42f8-a470-89697e7b7a53-kube-api-access-g6slr\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.585739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.587017 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.591395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3fda7467-b273-42f8-a470-89697e7b7a53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.609531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6slr\" (UniqueName: \"kubernetes.io/projected/3fda7467-b273-42f8-a470-89697e7b7a53-kube-api-access-g6slr\") pod \"marketplace-operator-79b997595-spsms\" (UID: \"3fda7467-b273-42f8-a470-89697e7b7a53\") " pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.664846 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.829883 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.995562 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljsbs\" (UniqueName: \"kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs\") pod \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.995640 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities\") pod \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.995751 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content\") pod \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\" (UID: \"a04358f4-8a2a-4acf-8607-0afc9ffceb9f\") " Jan 27 14:17:37 crc kubenswrapper[4833]: I0127 14:17:37.996572 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities" (OuterVolumeSpecName: "utilities") pod "a04358f4-8a2a-4acf-8607-0afc9ffceb9f" (UID: "a04358f4-8a2a-4acf-8607-0afc9ffceb9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.001190 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs" (OuterVolumeSpecName: "kube-api-access-ljsbs") pod "a04358f4-8a2a-4acf-8607-0afc9ffceb9f" (UID: "a04358f4-8a2a-4acf-8607-0afc9ffceb9f"). InnerVolumeSpecName "kube-api-access-ljsbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.019646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a04358f4-8a2a-4acf-8607-0afc9ffceb9f" (UID: "a04358f4-8a2a-4acf-8607-0afc9ffceb9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.097157 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljsbs\" (UniqueName: \"kubernetes.io/projected/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-kube-api-access-ljsbs\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.097192 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.097203 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a04358f4-8a2a-4acf-8607-0afc9ffceb9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.109699 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.170380 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.185399 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.197663 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxdk4\" (UniqueName: \"kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4\") pod \"700f73dc-a4b4-402c-acd7-dd23692ff53a\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.197748 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities\") pod \"700f73dc-a4b4-402c-acd7-dd23692ff53a\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.198156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content\") pod \"700f73dc-a4b4-402c-acd7-dd23692ff53a\" (UID: \"700f73dc-a4b4-402c-acd7-dd23692ff53a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.198641 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities" (OuterVolumeSpecName: "utilities") pod "700f73dc-a4b4-402c-acd7-dd23692ff53a" (UID: "700f73dc-a4b4-402c-acd7-dd23692ff53a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.202477 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.206381 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4" (OuterVolumeSpecName: "kube-api-access-kxdk4") pod "700f73dc-a4b4-402c-acd7-dd23692ff53a" (UID: "700f73dc-a4b4-402c-acd7-dd23692ff53a"). InnerVolumeSpecName "kube-api-access-kxdk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.260142 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "700f73dc-a4b4-402c-acd7-dd23692ff53a" (UID: "700f73dc-a4b4-402c-acd7-dd23692ff53a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.298616 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-spsms"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.298916 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnzpg\" (UniqueName: \"kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg\") pod \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.298983 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics\") pod \"f116ab69-14f9-4136-904c-730947658d83\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299034 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content\") pod \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities\") pod \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\" (UID: \"226dae94-d6a8-45f8-99e4-ec29189f0bd5\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pr2r\" (UniqueName: \"kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r\") pod \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299160 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities\") pod \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299186 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content\") pod \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\" (UID: \"e3c453af-dcdd-449a-b09b-dc6076b3b07a\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299212 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wrc\" (UniqueName: \"kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc\") pod 
\"f116ab69-14f9-4136-904c-730947658d83\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299261 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca\") pod \"f116ab69-14f9-4136-904c-730947658d83\" (UID: \"f116ab69-14f9-4136-904c-730947658d83\") " Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299536 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299553 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700f73dc-a4b4-402c-acd7-dd23692ff53a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299568 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxdk4\" (UniqueName: \"kubernetes.io/projected/700f73dc-a4b4-402c-acd7-dd23692ff53a-kube-api-access-kxdk4\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299882 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities" (OuterVolumeSpecName: "utilities") pod "226dae94-d6a8-45f8-99e4-ec29189f0bd5" (UID: "226dae94-d6a8-45f8-99e4-ec29189f0bd5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.299906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities" (OuterVolumeSpecName: "utilities") pod "e3c453af-dcdd-449a-b09b-dc6076b3b07a" (UID: "e3c453af-dcdd-449a-b09b-dc6076b3b07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.300193 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f116ab69-14f9-4136-904c-730947658d83" (UID: "f116ab69-14f9-4136-904c-730947658d83"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.306304 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc" (OuterVolumeSpecName: "kube-api-access-72wrc") pod "f116ab69-14f9-4136-904c-730947658d83" (UID: "f116ab69-14f9-4136-904c-730947658d83"). InnerVolumeSpecName "kube-api-access-72wrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.306916 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r" (OuterVolumeSpecName: "kube-api-access-8pr2r") pod "e3c453af-dcdd-449a-b09b-dc6076b3b07a" (UID: "e3c453af-dcdd-449a-b09b-dc6076b3b07a"). InnerVolumeSpecName "kube-api-access-8pr2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.308718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg" (OuterVolumeSpecName: "kube-api-access-dnzpg") pod "226dae94-d6a8-45f8-99e4-ec29189f0bd5" (UID: "226dae94-d6a8-45f8-99e4-ec29189f0bd5"). InnerVolumeSpecName "kube-api-access-dnzpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.310818 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f116ab69-14f9-4136-904c-730947658d83" (UID: "f116ab69-14f9-4136-904c-730947658d83"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.347551 4833 generic.go:334] "Generic (PLEG): container finished" podID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerID="f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954" exitCode=0 Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.347601 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9thj" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.347636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerDied","Data":"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.347752 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9thj" event={"ID":"a04358f4-8a2a-4acf-8607-0afc9ffceb9f","Type":"ContainerDied","Data":"30f5f7aefda62043d66c0e28652d470919a75ed49a5633f4314559033207993c"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.347809 4833 scope.go:117] "RemoveContainer" containerID="f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.350626 4833 generic.go:334] "Generic (PLEG): container finished" podID="f116ab69-14f9-4136-904c-730947658d83" containerID="4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4" exitCode=0 Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.350686 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" event={"ID":"f116ab69-14f9-4136-904c-730947658d83","Type":"ContainerDied","Data":"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.350720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" event={"ID":"f116ab69-14f9-4136-904c-730947658d83","Type":"ContainerDied","Data":"301526166a1d705f3a6a8bfcee80572ffd157706990629b652763455648272ce"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.350773 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-czv2v" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.359381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" event={"ID":"3fda7467-b273-42f8-a470-89697e7b7a53","Type":"ContainerStarted","Data":"93c9baacfc962f23a3b1b54236d51063d6f7fd3e862a28ca515a0e0a107a62d6"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.364034 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerID="1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41" exitCode=0 Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.364125 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerDied","Data":"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.364155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g5pw" event={"ID":"e3c453af-dcdd-449a-b09b-dc6076b3b07a","Type":"ContainerDied","Data":"926c67d74a9a051e833a092e11c1f7af6e6d872c18e72cda096389565e0c18b6"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.364222 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g5pw" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.368875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "226dae94-d6a8-45f8-99e4-ec29189f0bd5" (UID: "226dae94-d6a8-45f8-99e4-ec29189f0bd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.370846 4833 generic.go:334] "Generic (PLEG): container finished" podID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerID="7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06" exitCode=0 Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.370880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerDied","Data":"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.370930 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2md2f" event={"ID":"700f73dc-a4b4-402c-acd7-dd23692ff53a","Type":"ContainerDied","Data":"8d25586790bb3e623e52d09bace89ab05c2c827205d9e904abd216aa43696e4d"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.370951 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2md2f" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.373069 4833 generic.go:334] "Generic (PLEG): container finished" podID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerID="dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b" exitCode=0 Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.373099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerDied","Data":"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.373124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pcs" event={"ID":"226dae94-d6a8-45f8-99e4-ec29189f0bd5","Type":"ContainerDied","Data":"98e0cddadcb7a602e5553cba74b3dc7906ebeb75f362b5eecf9e0980f434db75"} Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.373134 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pcs" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.376042 4833 scope.go:117] "RemoveContainer" containerID="4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.401647 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402383 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f116ab69-14f9-4136-904c-730947658d83-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402402 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnzpg\" (UniqueName: \"kubernetes.io/projected/226dae94-d6a8-45f8-99e4-ec29189f0bd5-kube-api-access-dnzpg\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402411 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f116ab69-14f9-4136-904c-730947658d83-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402423 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402432 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/226dae94-d6a8-45f8-99e4-ec29189f0bd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402441 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pr2r\" (UniqueName: 
\"kubernetes.io/projected/e3c453af-dcdd-449a-b09b-dc6076b3b07a-kube-api-access-8pr2r\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402460 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.402470 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wrc\" (UniqueName: \"kubernetes.io/projected/f116ab69-14f9-4136-904c-730947658d83-kube-api-access-72wrc\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.405034 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9thj"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.412486 4833 scope.go:117] "RemoveContainer" containerID="61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.421950 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.431769 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2md2f"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.436622 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.457225 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-czv2v"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.461042 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.464163 4833 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5pcs"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.473905 4833 scope.go:117] "RemoveContainer" containerID="f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.474797 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954\": container with ID starting with f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954 not found: ID does not exist" containerID="f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.474854 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954"} err="failed to get container status \"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954\": rpc error: code = NotFound desc = could not find container \"f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954\": container with ID starting with f930aa35859b09bfeece2bb5b864e8528853a36937bae71edb284c9d6adfd954 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.474887 4833 scope.go:117] "RemoveContainer" containerID="4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.475377 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c\": container with ID starting with 4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c not found: ID does not exist" containerID="4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c" Jan 27 14:17:38 crc kubenswrapper[4833]: 
I0127 14:17:38.475418 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c"} err="failed to get container status \"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c\": rpc error: code = NotFound desc = could not find container \"4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c\": container with ID starting with 4133e72d37322ed7b56f0c73d836791f7bf186edc6f6a20b989dc09697a3d88c not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.475503 4833 scope.go:117] "RemoveContainer" containerID="61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.476005 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321\": container with ID starting with 61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321 not found: ID does not exist" containerID="61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.476045 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321"} err="failed to get container status \"61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321\": rpc error: code = NotFound desc = could not find container \"61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321\": container with ID starting with 61e71419c379dcf09e623365bf6ca70515175e5fe13f943cbd1a314f3d677321 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.476247 4833 scope.go:117] "RemoveContainer" containerID="4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4" Jan 27 14:17:38 crc 
kubenswrapper[4833]: I0127 14:17:38.480163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3c453af-dcdd-449a-b09b-dc6076b3b07a" (UID: "e3c453af-dcdd-449a-b09b-dc6076b3b07a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.494715 4833 scope.go:117] "RemoveContainer" containerID="4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.495799 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4\": container with ID starting with 4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4 not found: ID does not exist" containerID="4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.495850 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4"} err="failed to get container status \"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4\": rpc error: code = NotFound desc = could not find container \"4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4\": container with ID starting with 4c21cc1a3691627bc9fc57fd6c03c783e4c665a02e12c065f8e5c58ce971c9a4 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.495885 4833 scope.go:117] "RemoveContainer" containerID="1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.505614 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e3c453af-dcdd-449a-b09b-dc6076b3b07a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.513422 4833 scope.go:117] "RemoveContainer" containerID="2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.538941 4833 scope.go:117] "RemoveContainer" containerID="d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.555389 4833 scope.go:117] "RemoveContainer" containerID="1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.556039 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41\": container with ID starting with 1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41 not found: ID does not exist" containerID="1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.556091 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41"} err="failed to get container status \"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41\": rpc error: code = NotFound desc = could not find container \"1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41\": container with ID starting with 1137b3e479abf9696739c5f15d4f36eeb70d378950ad1e7b438097e36c33bc41 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.556126 4833 scope.go:117] "RemoveContainer" containerID="2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.557439 4833 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41\": container with ID starting with 2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41 not found: ID does not exist" containerID="2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.557484 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41"} err="failed to get container status \"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41\": rpc error: code = NotFound desc = could not find container \"2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41\": container with ID starting with 2de70542090566c8088bf004f007e25772301c304e82474e79708af64efffe41 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.557501 4833 scope.go:117] "RemoveContainer" containerID="d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.557789 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9\": container with ID starting with d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9 not found: ID does not exist" containerID="d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.557822 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9"} err="failed to get container status \"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9\": rpc error: code = NotFound desc = could not find container 
\"d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9\": container with ID starting with d03a19e60ac3a2fe6eabd6e0ef0416ce28baeeda33cbc3ca71d76fe50e7edda9 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.557846 4833 scope.go:117] "RemoveContainer" containerID="7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.576591 4833 scope.go:117] "RemoveContainer" containerID="0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.594218 4833 scope.go:117] "RemoveContainer" containerID="99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.611626 4833 scope.go:117] "RemoveContainer" containerID="7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.612272 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06\": container with ID starting with 7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06 not found: ID does not exist" containerID="7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.612316 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06"} err="failed to get container status \"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06\": rpc error: code = NotFound desc = could not find container \"7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06\": container with ID starting with 7514dfba5ec620107d82443a38284005890ab93f7bd0147d544a4cdb01bf0a06 not found: ID does not exist" Jan 27 14:17:38 crc 
kubenswrapper[4833]: I0127 14:17:38.612352 4833 scope.go:117] "RemoveContainer" containerID="0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.612876 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635\": container with ID starting with 0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635 not found: ID does not exist" containerID="0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.612916 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635"} err="failed to get container status \"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635\": rpc error: code = NotFound desc = could not find container \"0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635\": container with ID starting with 0243d9c3d3ef06dfb4dd3970c15d1e9ce4a2e02df4c503a72b88dc2638fae635 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.612957 4833 scope.go:117] "RemoveContainer" containerID="99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.613494 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13\": container with ID starting with 99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13 not found: ID does not exist" containerID="99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.613529 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13"} err="failed to get container status \"99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13\": rpc error: code = NotFound desc = could not find container \"99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13\": container with ID starting with 99edb46c87b3de495209f21c22e5fa4e0d6bd4feb1632b7e6a428cbb272aab13 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.613551 4833 scope.go:117] "RemoveContainer" containerID="dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.632315 4833 scope.go:117] "RemoveContainer" containerID="c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.649292 4833 scope.go:117] "RemoveContainer" containerID="e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.661990 4833 scope.go:117] "RemoveContainer" containerID="dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.662385 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b\": container with ID starting with dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b not found: ID does not exist" containerID="dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.662425 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b"} err="failed to get container status \"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b\": rpc error: code = 
NotFound desc = could not find container \"dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b\": container with ID starting with dd65cb9cd444a3caf901556a14d6bac092732ef6fe2fe1182c1a190d21ec012b not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.662472 4833 scope.go:117] "RemoveContainer" containerID="c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.662867 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4\": container with ID starting with c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4 not found: ID does not exist" containerID="c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.662901 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4"} err="failed to get container status \"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4\": rpc error: code = NotFound desc = could not find container \"c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4\": container with ID starting with c384ea2723312fb837bc81f6fdfe5bb820e04d8b072b28d7d613df8b3f3755a4 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.662919 4833 scope.go:117] "RemoveContainer" containerID="e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383" Jan 27 14:17:38 crc kubenswrapper[4833]: E0127 14:17:38.663203 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383\": container with ID starting with 
e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383 not found: ID does not exist" containerID="e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.663229 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383"} err="failed to get container status \"e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383\": rpc error: code = NotFound desc = could not find container \"e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383\": container with ID starting with e43be8a4ec7736a251d7c66af8fbfc6f267cb5658648808f7d745d8ba81c9383 not found: ID does not exist" Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.697827 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"] Jan 27 14:17:38 crc kubenswrapper[4833]: I0127 14:17:38.702109 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9g5pw"] Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.229286 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" path="/var/lib/kubelet/pods/226dae94-d6a8-45f8-99e4-ec29189f0bd5/volumes" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.230483 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" path="/var/lib/kubelet/pods/700f73dc-a4b4-402c-acd7-dd23692ff53a/volumes" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.231182 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" path="/var/lib/kubelet/pods/a04358f4-8a2a-4acf-8607-0afc9ffceb9f/volumes" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.232731 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" path="/var/lib/kubelet/pods/e3c453af-dcdd-449a-b09b-dc6076b3b07a/volumes" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.233606 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f116ab69-14f9-4136-904c-730947658d83" path="/var/lib/kubelet/pods/f116ab69-14f9-4136-904c-730947658d83/volumes" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.385786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" event={"ID":"3fda7467-b273-42f8-a470-89697e7b7a53","Type":"ContainerStarted","Data":"a68a9872e090f7e878e634540cee4528f5fe2de15836b815f3aca7878a3f1d27"} Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.385937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.391626 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.404296 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-spsms" podStartSLOduration=2.404278575 podStartE2EDuration="2.404278575s" podCreationTimestamp="2026-01-27 14:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:17:39.401158748 +0000 UTC m=+361.052483160" watchObservedRunningTime="2026-01-27 14:17:39.404278575 +0000 UTC m=+361.055602977" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.511971 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ft9"] Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512198 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512210 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512227 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512234 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512245 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512252 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512261 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512269 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512279 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512286 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512297 4833 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512304 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="extract-utilities" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512312 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512319 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512336 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512348 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512356 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512365 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512372 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512385 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512392 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512401 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512408 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="extract-content" Jan 27 14:17:39 crc kubenswrapper[4833]: E0127 14:17:39.512420 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512428 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512572 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04358f4-8a2a-4acf-8607-0afc9ffceb9f" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512583 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c453af-dcdd-449a-b09b-dc6076b3b07a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512595 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="226dae94-d6a8-45f8-99e4-ec29189f0bd5" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512606 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f116ab69-14f9-4136-904c-730947658d83" containerName="marketplace-operator" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.512618 4833 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="700f73dc-a4b4-402c-acd7-dd23692ff53a" containerName="registry-server" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.513412 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.516659 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.524239 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ft9"] Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.630396 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-utilities\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.630527 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfbrz\" (UniqueName: \"kubernetes.io/projected/3275450d-1578-4d06-955f-c667eadb6a3b-kube-api-access-hfbrz\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.630573 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-catalog-content\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.707042 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-b7fnl"] Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.708658 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.710367 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.719992 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7fnl"] Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.731732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-utilities\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.731820 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfbrz\" (UniqueName: \"kubernetes.io/projected/3275450d-1578-4d06-955f-c667eadb6a3b-kube-api-access-hfbrz\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.731887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-catalog-content\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.733183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-utilities\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.733245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3275450d-1578-4d06-955f-c667eadb6a3b-catalog-content\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.775637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfbrz\" (UniqueName: \"kubernetes.io/projected/3275450d-1578-4d06-955f-c667eadb6a3b-kube-api-access-hfbrz\") pod \"redhat-marketplace-p6ft9\" (UID: \"3275450d-1578-4d06-955f-c667eadb6a3b\") " pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.833576 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-utilities\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.833837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-catalog-content\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.833923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pxf\" 
(UniqueName: \"kubernetes.io/projected/dd818cb4-af75-444f-8e33-25cf79769b03-kube-api-access-j5pxf\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.834287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.936060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-utilities\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.936576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-catalog-content\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.936659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5pxf\" (UniqueName: \"kubernetes.io/projected/dd818cb4-af75-444f-8e33-25cf79769b03-kube-api-access-j5pxf\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.936706 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-utilities\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc 
kubenswrapper[4833]: I0127 14:17:39.937030 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818cb4-af75-444f-8e33-25cf79769b03-catalog-content\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:39 crc kubenswrapper[4833]: I0127 14:17:39.963318 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5pxf\" (UniqueName: \"kubernetes.io/projected/dd818cb4-af75-444f-8e33-25cf79769b03-kube-api-access-j5pxf\") pod \"redhat-operators-b7fnl\" (UID: \"dd818cb4-af75-444f-8e33-25cf79769b03\") " pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:40 crc kubenswrapper[4833]: I0127 14:17:40.060063 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:40 crc kubenswrapper[4833]: I0127 14:17:40.230576 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p6ft9"] Jan 27 14:17:40 crc kubenswrapper[4833]: W0127 14:17:40.235581 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3275450d_1578_4d06_955f_c667eadb6a3b.slice/crio-282affbef31d7d0a9cd0702c7d95247a4c4cd835f1a2b5ba47582e931da31122 WatchSource:0}: Error finding container 282affbef31d7d0a9cd0702c7d95247a4c4cd835f1a2b5ba47582e931da31122: Status 404 returned error can't find the container with id 282affbef31d7d0a9cd0702c7d95247a4c4cd835f1a2b5ba47582e931da31122 Jan 27 14:17:40 crc kubenswrapper[4833]: I0127 14:17:40.399174 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ft9" event={"ID":"3275450d-1578-4d06-955f-c667eadb6a3b","Type":"ContainerStarted","Data":"282affbef31d7d0a9cd0702c7d95247a4c4cd835f1a2b5ba47582e931da31122"} Jan 27 14:17:40 
crc kubenswrapper[4833]: W0127 14:17:40.462437 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd818cb4_af75_444f_8e33_25cf79769b03.slice/crio-300df97a44ec5583c3e34874b07df7fd5497551cd809da3cfc2cd6615e0e8bfc WatchSource:0}: Error finding container 300df97a44ec5583c3e34874b07df7fd5497551cd809da3cfc2cd6615e0e8bfc: Status 404 returned error can't find the container with id 300df97a44ec5583c3e34874b07df7fd5497551cd809da3cfc2cd6615e0e8bfc Jan 27 14:17:40 crc kubenswrapper[4833]: I0127 14:17:40.464530 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7fnl"] Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.405477 4833 generic.go:334] "Generic (PLEG): container finished" podID="3275450d-1578-4d06-955f-c667eadb6a3b" containerID="26ebfafc5849b99e74ada1167442a8b4071a0e4e447dd429fb2544e1e0adb86d" exitCode=0 Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.405580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ft9" event={"ID":"3275450d-1578-4d06-955f-c667eadb6a3b","Type":"ContainerDied","Data":"26ebfafc5849b99e74ada1167442a8b4071a0e4e447dd429fb2544e1e0adb86d"} Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.411810 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd818cb4-af75-444f-8e33-25cf79769b03" containerID="d75bc433be3c4cf019374355479513bcfaf989557dead3e8c527043cd4ea5af5" exitCode=0 Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.411905 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fnl" event={"ID":"dd818cb4-af75-444f-8e33-25cf79769b03","Type":"ContainerDied","Data":"d75bc433be3c4cf019374355479513bcfaf989557dead3e8c527043cd4ea5af5"} Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.411938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-b7fnl" event={"ID":"dd818cb4-af75-444f-8e33-25cf79769b03","Type":"ContainerStarted","Data":"300df97a44ec5583c3e34874b07df7fd5497551cd809da3cfc2cd6615e0e8bfc"} Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.902501 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.904005 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.907153 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:17:41 crc kubenswrapper[4833]: I0127 14:17:41.914592 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.062974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.063031 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.063162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kgm\" (UniqueName: 
\"kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.105683 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.106862 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.108645 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.111643 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.165006 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.165048 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.165077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kgm\" (UniqueName: 
\"kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.165368 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.165689 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.185875 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kgm\" (UniqueName: \"kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm\") pod \"certified-operators-j8vfp\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.265992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.266044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwb2h\" 
(UniqueName: \"kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.266086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.266306 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.368420 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.370093 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwb2h\" (UniqueName: \"kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.370140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " 
pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.370320 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.370666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.389914 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwb2h\" (UniqueName: \"kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h\") pod \"community-operators-tghcg\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.419927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fnl" event={"ID":"dd818cb4-af75-444f-8e33-25cf79769b03","Type":"ContainerStarted","Data":"b6f9151f03b37d7edfb6968426a6b55be31db350fa7be1d67961e0f3c651c7da"} Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.424677 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.673076 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:17:42 crc kubenswrapper[4833]: W0127 14:17:42.674744 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddebecd7b_5b83_4347_9c4f_bb33d20975e5.slice/crio-e259eac1aef3c7a57233402f125aef467a869d0b7f654a3dc6ed545da08803fc WatchSource:0}: Error finding container e259eac1aef3c7a57233402f125aef467a869d0b7f654a3dc6ed545da08803fc: Status 404 returned error can't find the container with id e259eac1aef3c7a57233402f125aef467a869d0b7f654a3dc6ed545da08803fc Jan 27 14:17:42 crc kubenswrapper[4833]: I0127 14:17:42.816934 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.427210 4833 generic.go:334] "Generic (PLEG): container finished" podID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerID="d2f4fe6d3b2ca37d383b673c3a42ca32e44c3e22354a8fb9242f6c0e46d85047" exitCode=0 Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.427636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerDied","Data":"d2f4fe6d3b2ca37d383b673c3a42ca32e44c3e22354a8fb9242f6c0e46d85047"} Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.427748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerStarted","Data":"e259eac1aef3c7a57233402f125aef467a869d0b7f654a3dc6ed545da08803fc"} Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.439347 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="dd818cb4-af75-444f-8e33-25cf79769b03" containerID="b6f9151f03b37d7edfb6968426a6b55be31db350fa7be1d67961e0f3c651c7da" exitCode=0 Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.439492 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fnl" event={"ID":"dd818cb4-af75-444f-8e33-25cf79769b03","Type":"ContainerDied","Data":"b6f9151f03b37d7edfb6968426a6b55be31db350fa7be1d67961e0f3c651c7da"} Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.449172 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tghcg" event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerDied","Data":"92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053"} Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.449535 4833 generic.go:334] "Generic (PLEG): container finished" podID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerID="92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053" exitCode=0 Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.449683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tghcg" event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerStarted","Data":"4345fbf73c4344bc051cf19c9afd2b81404b06ac1bc13841bd328120efe95812"} Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.453967 4833 generic.go:334] "Generic (PLEG): container finished" podID="3275450d-1578-4d06-955f-c667eadb6a3b" containerID="4a104582a338179aa88451384bb118a781355abca8a3decadb9e499d97582e7e" exitCode=0 Jan 27 14:17:43 crc kubenswrapper[4833]: I0127 14:17:43.453996 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ft9" event={"ID":"3275450d-1578-4d06-955f-c667eadb6a3b","Type":"ContainerDied","Data":"4a104582a338179aa88451384bb118a781355abca8a3decadb9e499d97582e7e"} Jan 27 14:17:44 crc kubenswrapper[4833]: I0127 14:17:44.255583 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7tkrx" Jan 27 14:17:44 crc kubenswrapper[4833]: I0127 14:17:44.315502 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:17:44 crc kubenswrapper[4833]: I0127 14:17:44.460509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p6ft9" event={"ID":"3275450d-1578-4d06-955f-c667eadb6a3b","Type":"ContainerStarted","Data":"5a0119a7e73df22f063e96c61ee387793dc80e47e850d587e6517c93e09e7733"} Jan 27 14:17:44 crc kubenswrapper[4833]: I0127 14:17:44.477223 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p6ft9" podStartSLOduration=3.038703745 podStartE2EDuration="5.477204825s" podCreationTimestamp="2026-01-27 14:17:39 +0000 UTC" firstStartedPulling="2026-01-27 14:17:41.407691624 +0000 UTC m=+363.059016036" lastFinishedPulling="2026-01-27 14:17:43.846192714 +0000 UTC m=+365.497517116" observedRunningTime="2026-01-27 14:17:44.473516693 +0000 UTC m=+366.124841115" watchObservedRunningTime="2026-01-27 14:17:44.477204825 +0000 UTC m=+366.128529227" Jan 27 14:17:45 crc kubenswrapper[4833]: I0127 14:17:45.467045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fnl" event={"ID":"dd818cb4-af75-444f-8e33-25cf79769b03","Type":"ContainerStarted","Data":"0a275d2d47b52c01d4ee7dec747b5a955b7f06f4054dbee4cd266f92fa6012f3"} Jan 27 14:17:45 crc kubenswrapper[4833]: I0127 14:17:45.469429 4833 generic.go:334] "Generic (PLEG): container finished" podID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerID="c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6" exitCode=0 Jan 27 14:17:45 crc kubenswrapper[4833]: I0127 14:17:45.469529 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tghcg" event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerDied","Data":"c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6"} Jan 27 14:17:45 crc kubenswrapper[4833]: I0127 14:17:45.472593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerStarted","Data":"63942fa7869c24c73d377c2535deeffdc53e82fa4131a364d5302fc5e72eb3a3"} Jan 27 14:17:45 crc kubenswrapper[4833]: I0127 14:17:45.483292 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7fnl" podStartSLOduration=2.849590435 podStartE2EDuration="6.483274217s" podCreationTimestamp="2026-01-27 14:17:39 +0000 UTC" firstStartedPulling="2026-01-27 14:17:41.414475473 +0000 UTC m=+363.065799875" lastFinishedPulling="2026-01-27 14:17:45.048159255 +0000 UTC m=+366.699483657" observedRunningTime="2026-01-27 14:17:45.482121998 +0000 UTC m=+367.133446420" watchObservedRunningTime="2026-01-27 14:17:45.483274217 +0000 UTC m=+367.134598619" Jan 27 14:17:46 crc kubenswrapper[4833]: I0127 14:17:46.479885 4833 generic.go:334] "Generic (PLEG): container finished" podID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerID="63942fa7869c24c73d377c2535deeffdc53e82fa4131a364d5302fc5e72eb3a3" exitCode=0 Jan 27 14:17:46 crc kubenswrapper[4833]: I0127 14:17:46.480251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerDied","Data":"63942fa7869c24c73d377c2535deeffdc53e82fa4131a364d5302fc5e72eb3a3"} Jan 27 14:17:46 crc kubenswrapper[4833]: I0127 14:17:46.485907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tghcg" 
event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerStarted","Data":"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5"} Jan 27 14:17:46 crc kubenswrapper[4833]: I0127 14:17:46.522951 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tghcg" podStartSLOduration=2.059081204 podStartE2EDuration="4.522936463s" podCreationTimestamp="2026-01-27 14:17:42 +0000 UTC" firstStartedPulling="2026-01-27 14:17:43.450813877 +0000 UTC m=+365.102138279" lastFinishedPulling="2026-01-27 14:17:45.914669136 +0000 UTC m=+367.565993538" observedRunningTime="2026-01-27 14:17:46.519821685 +0000 UTC m=+368.171146087" watchObservedRunningTime="2026-01-27 14:17:46.522936463 +0000 UTC m=+368.174260865" Jan 27 14:17:47 crc kubenswrapper[4833]: I0127 14:17:47.497462 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerStarted","Data":"d1218809ebe407d0cc6fb0f2ffb771136ce8b2ac9470b5890c5fcfc2eda7e994"} Jan 27 14:17:47 crc kubenswrapper[4833]: I0127 14:17:47.517765 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8vfp" podStartSLOduration=3.08834346 podStartE2EDuration="6.517744926s" podCreationTimestamp="2026-01-27 14:17:41 +0000 UTC" firstStartedPulling="2026-01-27 14:17:43.430005102 +0000 UTC m=+365.081329514" lastFinishedPulling="2026-01-27 14:17:46.859406578 +0000 UTC m=+368.510730980" observedRunningTime="2026-01-27 14:17:47.516420533 +0000 UTC m=+369.167744955" watchObservedRunningTime="2026-01-27 14:17:47.517744926 +0000 UTC m=+369.169069338" Jan 27 14:17:49 crc kubenswrapper[4833]: I0127 14:17:49.835344 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:49 crc kubenswrapper[4833]: I0127 14:17:49.836102 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:49 crc kubenswrapper[4833]: I0127 14:17:49.883229 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:50 crc kubenswrapper[4833]: I0127 14:17:50.060975 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:50 crc kubenswrapper[4833]: I0127 14:17:50.061080 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:17:50 crc kubenswrapper[4833]: I0127 14:17:50.547837 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p6ft9" Jan 27 14:17:51 crc kubenswrapper[4833]: I0127 14:17:51.104467 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7fnl" podUID="dd818cb4-af75-444f-8e33-25cf79769b03" containerName="registry-server" probeResult="failure" output=< Jan 27 14:17:51 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:17:51 crc kubenswrapper[4833]: > Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.267903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.267996 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.307895 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.425667 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.425706 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.466170 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.558411 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:17:52 crc kubenswrapper[4833]: I0127 14:17:52.560391 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:18:00 crc kubenswrapper[4833]: I0127 14:18:00.099160 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:18:00 crc kubenswrapper[4833]: I0127 14:18:00.146706 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7fnl" Jan 27 14:18:02 crc kubenswrapper[4833]: I0127 14:18:02.261279 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:18:02 crc kubenswrapper[4833]: I0127 14:18:02.261347 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 
14:18:09.360323 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" podUID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" containerName="registry" containerID="cri-o://54bd27266da270b60190249fb963df2ecc7e9e5f7eb923180cc16652a496b01b" gracePeriod=30 Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.604531 4833 generic.go:334] "Generic (PLEG): container finished" podID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" containerID="54bd27266da270b60190249fb963df2ecc7e9e5f7eb923180cc16652a496b01b" exitCode=0 Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.604790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" event={"ID":"4caae65f-8437-4b6d-ae10-e0ac8625e4b4","Type":"ContainerDied","Data":"54bd27266da270b60190249fb963df2ecc7e9e5f7eb923180cc16652a496b01b"} Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.748746 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.869739 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.869985 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t92rr\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870142 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870320 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.870388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token\") pod \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\" (UID: \"4caae65f-8437-4b6d-ae10-e0ac8625e4b4\") " Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.871082 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.871491 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.875812 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.876163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.878099 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.879261 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.883696 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr" (OuterVolumeSpecName: "kube-api-access-t92rr") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "kube-api-access-t92rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.887055 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4caae65f-8437-4b6d-ae10-e0ac8625e4b4" (UID: "4caae65f-8437-4b6d-ae10-e0ac8625e4b4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972536 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972683 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972706 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972728 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t92rr\" (UniqueName: 
\"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-kube-api-access-t92rr\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972746 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972765 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:09 crc kubenswrapper[4833]: I0127 14:18:09.972780 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4caae65f-8437-4b6d-ae10-e0ac8625e4b4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:18:10 crc kubenswrapper[4833]: I0127 14:18:10.611626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" event={"ID":"4caae65f-8437-4b6d-ae10-e0ac8625e4b4","Type":"ContainerDied","Data":"f197c327ad9e3a89d6d33b2854103fe5d3f95062d047f69a627582a988ed4a08"} Jan 27 14:18:10 crc kubenswrapper[4833]: I0127 14:18:10.611710 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xzhbs" Jan 27 14:18:10 crc kubenswrapper[4833]: I0127 14:18:10.611745 4833 scope.go:117] "RemoveContainer" containerID="54bd27266da270b60190249fb963df2ecc7e9e5f7eb923180cc16652a496b01b" Jan 27 14:18:10 crc kubenswrapper[4833]: I0127 14:18:10.638674 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:18:10 crc kubenswrapper[4833]: I0127 14:18:10.645089 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xzhbs"] Jan 27 14:18:11 crc kubenswrapper[4833]: I0127 14:18:11.218710 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" path="/var/lib/kubelet/pods/4caae65f-8437-4b6d-ae10-e0ac8625e4b4/volumes" Jan 27 14:18:32 crc kubenswrapper[4833]: I0127 14:18:32.261299 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:18:32 crc kubenswrapper[4833]: I0127 14:18:32.262618 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.261097 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:19:02 
crc kubenswrapper[4833]: I0127 14:19:02.261716 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.261787 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.262511 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.262558 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5" gracePeriod=600 Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.922607 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5" exitCode=0 Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.922673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5"} Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.923244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044"} Jan 27 14:19:02 crc kubenswrapper[4833]: I0127 14:19:02.923280 4833 scope.go:117] "RemoveContainer" containerID="1397359d2ecd52b7cc8546ce30dbf5170b923e25e9c12ee6b898252e7d0fc32b" Jan 27 14:20:39 crc kubenswrapper[4833]: I0127 14:20:39.482354 4833 scope.go:117] "RemoveContainer" containerID="491ba252d1e2837611afbd4387c77dffb21164fce5ca6a8473c6b908fb1ea55a" Jan 27 14:20:39 crc kubenswrapper[4833]: I0127 14:20:39.513042 4833 scope.go:117] "RemoveContainer" containerID="8b46dbbc96dd54a56613220cc713ded7ed463b680f16630d43fe9ee83ea93124" Jan 27 14:21:02 crc kubenswrapper[4833]: I0127 14:21:02.261558 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:21:02 crc kubenswrapper[4833]: I0127 14:21:02.262476 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:21:32 crc kubenswrapper[4833]: I0127 14:21:32.261053 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:21:32 crc kubenswrapper[4833]: I0127 14:21:32.261687 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:22:02 crc kubenswrapper[4833]: I0127 14:22:02.260345 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:22:02 crc kubenswrapper[4833]: I0127 14:22:02.260868 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:22:02 crc kubenswrapper[4833]: I0127 14:22:02.260937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:22:02 crc kubenswrapper[4833]: I0127 14:22:02.261674 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:22:02 crc kubenswrapper[4833]: I0127 14:22:02.261797 4833 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044" gracePeriod=600 Jan 27 14:22:03 crc kubenswrapper[4833]: I0127 14:22:03.015952 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044" exitCode=0 Jan 27 14:22:03 crc kubenswrapper[4833]: I0127 14:22:03.016048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044"} Jan 27 14:22:03 crc kubenswrapper[4833]: I0127 14:22:03.016352 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd"} Jan 27 14:22:03 crc kubenswrapper[4833]: I0127 14:22:03.016376 4833 scope.go:117] "RemoveContainer" containerID="5552fcd59dd0d2efb42e28a33e8a77a2a749dd5d883a8a954866c7b6125815a5" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.070130 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-trpwd"] Jan 27 14:23:59 crc kubenswrapper[4833]: E0127 14:23:59.070833 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" containerName="registry" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.070845 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" containerName="registry" Jan 27 
14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.070932 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4caae65f-8437-4b6d-ae10-e0ac8625e4b4" containerName="registry" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.071268 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.074153 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.074349 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ptr5z" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.074548 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.076015 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cbksb"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.076828 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cbksb" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.079673 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r87cd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.091341 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-trpwd"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.100404 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwjhk"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.101242 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.103656 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-p644j" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.112099 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cbksb"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.119109 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwjhk"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.172327 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wg6k\" (UniqueName: \"kubernetes.io/projected/6090a3c8-feca-42f0-bafb-0886ef1a591a-kube-api-access-7wg6k\") pod \"cert-manager-858654f9db-cbksb\" (UID: \"6090a3c8-feca-42f0-bafb-0886ef1a591a\") " pod="cert-manager/cert-manager-858654f9db-cbksb" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.172394 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvtl2\" (UniqueName: \"kubernetes.io/projected/efb6fbe6-fd59-4072-ac9c-99534d8d97e5-kube-api-access-lvtl2\") pod \"cert-manager-webhook-687f57d79b-gwjhk\" (UID: \"efb6fbe6-fd59-4072-ac9c-99534d8d97e5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.172433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lnm\" (UniqueName: \"kubernetes.io/projected/d9f55934-05a1-4c77-b428-337683dfb09c-kube-api-access-j9lnm\") pod \"cert-manager-cainjector-cf98fcc89-trpwd\" (UID: \"d9f55934-05a1-4c77-b428-337683dfb09c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.273686 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvtl2\" (UniqueName: \"kubernetes.io/projected/efb6fbe6-fd59-4072-ac9c-99534d8d97e5-kube-api-access-lvtl2\") pod \"cert-manager-webhook-687f57d79b-gwjhk\" (UID: \"efb6fbe6-fd59-4072-ac9c-99534d8d97e5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.273778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lnm\" (UniqueName: \"kubernetes.io/projected/d9f55934-05a1-4c77-b428-337683dfb09c-kube-api-access-j9lnm\") pod \"cert-manager-cainjector-cf98fcc89-trpwd\" (UID: \"d9f55934-05a1-4c77-b428-337683dfb09c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.273816 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wg6k\" (UniqueName: \"kubernetes.io/projected/6090a3c8-feca-42f0-bafb-0886ef1a591a-kube-api-access-7wg6k\") pod \"cert-manager-858654f9db-cbksb\" (UID: \"6090a3c8-feca-42f0-bafb-0886ef1a591a\") " pod="cert-manager/cert-manager-858654f9db-cbksb" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.293208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvtl2\" (UniqueName: \"kubernetes.io/projected/efb6fbe6-fd59-4072-ac9c-99534d8d97e5-kube-api-access-lvtl2\") pod \"cert-manager-webhook-687f57d79b-gwjhk\" (UID: \"efb6fbe6-fd59-4072-ac9c-99534d8d97e5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.293246 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wg6k\" (UniqueName: \"kubernetes.io/projected/6090a3c8-feca-42f0-bafb-0886ef1a591a-kube-api-access-7wg6k\") pod \"cert-manager-858654f9db-cbksb\" (UID: \"6090a3c8-feca-42f0-bafb-0886ef1a591a\") " 
pod="cert-manager/cert-manager-858654f9db-cbksb" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.294396 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lnm\" (UniqueName: \"kubernetes.io/projected/d9f55934-05a1-4c77-b428-337683dfb09c-kube-api-access-j9lnm\") pod \"cert-manager-cainjector-cf98fcc89-trpwd\" (UID: \"d9f55934-05a1-4c77-b428-337683dfb09c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.402337 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.414322 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cbksb" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.427422 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.690101 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwjhk"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.697629 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.724572 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" event={"ID":"efb6fbe6-fd59-4072-ac9c-99534d8d97e5","Type":"ContainerStarted","Data":"5fe6abe50695c1f4317a5ff7596f0e9114f3eb621cfb97a03cdbf3a7181d5e8d"} Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.839555 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-trpwd"] Jan 27 14:23:59 crc kubenswrapper[4833]: I0127 14:23:59.842674 4833 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["cert-manager/cert-manager-858654f9db-cbksb"] Jan 27 14:23:59 crc kubenswrapper[4833]: W0127 14:23:59.843286 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6090a3c8_feca_42f0_bafb_0886ef1a591a.slice/crio-14c8f89ce552bb0ac81aa10603743010ec368caa91e88c5927ab86e1efd62a46 WatchSource:0}: Error finding container 14c8f89ce552bb0ac81aa10603743010ec368caa91e88c5927ab86e1efd62a46: Status 404 returned error can't find the container with id 14c8f89ce552bb0ac81aa10603743010ec368caa91e88c5927ab86e1efd62a46 Jan 27 14:23:59 crc kubenswrapper[4833]: W0127 14:23:59.844849 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9f55934_05a1_4c77_b428_337683dfb09c.slice/crio-a380c4c5e47f265e0c2c6cf0c7fb6b130969d2642fd43e420eab714cf8116561 WatchSource:0}: Error finding container a380c4c5e47f265e0c2c6cf0c7fb6b130969d2642fd43e420eab714cf8116561: Status 404 returned error can't find the container with id a380c4c5e47f265e0c2c6cf0c7fb6b130969d2642fd43e420eab714cf8116561 Jan 27 14:24:00 crc kubenswrapper[4833]: I0127 14:24:00.732877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cbksb" event={"ID":"6090a3c8-feca-42f0-bafb-0886ef1a591a","Type":"ContainerStarted","Data":"14c8f89ce552bb0ac81aa10603743010ec368caa91e88c5927ab86e1efd62a46"} Jan 27 14:24:00 crc kubenswrapper[4833]: I0127 14:24:00.733864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" event={"ID":"d9f55934-05a1-4c77-b428-337683dfb09c","Type":"ContainerStarted","Data":"a380c4c5e47f265e0c2c6cf0c7fb6b130969d2642fd43e420eab714cf8116561"} Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.261087 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.261563 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.753062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" event={"ID":"d9f55934-05a1-4c77-b428-337683dfb09c","Type":"ContainerStarted","Data":"b8b4219034ca6c94f7066fcc7f51bb5812c0b3f20b7a848a67d6cf4628b27ae0"} Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.766687 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" event={"ID":"efb6fbe6-fd59-4072-ac9c-99534d8d97e5","Type":"ContainerStarted","Data":"faae5bf072e3353b6d31b3e296760b816510743ac82c069ef461aa1f00069627"} Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.766867 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.772969 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-trpwd" podStartSLOduration=1.028686102 podStartE2EDuration="3.772953292s" podCreationTimestamp="2026-01-27 14:23:59 +0000 UTC" firstStartedPulling="2026-01-27 14:23:59.846388035 +0000 UTC m=+741.497712437" lastFinishedPulling="2026-01-27 14:24:02.590655215 +0000 UTC m=+744.241979627" observedRunningTime="2026-01-27 14:24:02.76960207 +0000 UTC m=+744.420926482" 
watchObservedRunningTime="2026-01-27 14:24:02.772953292 +0000 UTC m=+744.424277714" Jan 27 14:24:02 crc kubenswrapper[4833]: I0127 14:24:02.790603 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" podStartSLOduration=0.98013372 podStartE2EDuration="3.790584291s" podCreationTimestamp="2026-01-27 14:23:59 +0000 UTC" firstStartedPulling="2026-01-27 14:23:59.697422049 +0000 UTC m=+741.348746451" lastFinishedPulling="2026-01-27 14:24:02.50787262 +0000 UTC m=+744.159197022" observedRunningTime="2026-01-27 14:24:02.789523715 +0000 UTC m=+744.440848127" watchObservedRunningTime="2026-01-27 14:24:02.790584291 +0000 UTC m=+744.441908703" Jan 27 14:24:04 crc kubenswrapper[4833]: I0127 14:24:04.779147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cbksb" event={"ID":"6090a3c8-feca-42f0-bafb-0886ef1a591a","Type":"ContainerStarted","Data":"2e1fbb12d88ffd6b7646e71f844b568c67229ab77447fe4121533f97de9b6e5d"} Jan 27 14:24:04 crc kubenswrapper[4833]: I0127 14:24:04.830109 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-cbksb" podStartSLOduration=1.95387309 podStartE2EDuration="5.83008715s" podCreationTimestamp="2026-01-27 14:23:59 +0000 UTC" firstStartedPulling="2026-01-27 14:23:59.845249187 +0000 UTC m=+741.496573589" lastFinishedPulling="2026-01-27 14:24:03.721463247 +0000 UTC m=+745.372787649" observedRunningTime="2026-01-27 14:24:04.801828031 +0000 UTC m=+746.453152473" watchObservedRunningTime="2026-01-27 14:24:04.83008715 +0000 UTC m=+746.481411552" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.101571 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jpt5h"] Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102102 4833 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-controller" containerID="cri-o://f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102119 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="nbdb" containerID="cri-o://57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102236 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="northd" containerID="cri-o://21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102304 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102360 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-node" containerID="cri-o://56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102395 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-acl-logging" 
containerID="cri-o://db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.102630 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="sbdb" containerID="cri-o://d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.139203 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" containerID="cri-o://a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" gracePeriod=30 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.796028 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.798679 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovn-acl-logging/0.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.799994 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovn-controller/0.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.800458 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.806303 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/2.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.806800 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/1.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.806884 4833 generic.go:334] "Generic (PLEG): container finished" podID="b7a7c135-ca95-4e75-b823-d1e45101a761" containerID="3ff19209ce0ef90cfad465697fd3b41d240f32ef7b2d01dd3d720eaed3f27367" exitCode=2 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.806929 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerDied","Data":"3ff19209ce0ef90cfad465697fd3b41d240f32ef7b2d01dd3d720eaed3f27367"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.807003 4833 scope.go:117] "RemoveContainer" containerID="67c8b1628162d73508fe972750a3b489928092300cf1eba37cb39ff62ea50b1f" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.807566 4833 scope.go:117] "RemoveContainer" containerID="3ff19209ce0ef90cfad465697fd3b41d240f32ef7b2d01dd3d720eaed3f27367" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.807791 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-npb46_openshift-multus(b7a7c135-ca95-4e75-b823-d1e45101a761)\"" pod="openshift-multus/multus-npb46" podUID="b7a7c135-ca95-4e75-b823-d1e45101a761" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.809360 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovnkube-controller/3.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.812996 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovn-acl-logging/0.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.814482 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jpt5h_696d56dd-3ce4-489e-a258-677cf1fd8f9b/ovn-controller/0.log" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815364 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815399 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815416 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815434 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815480 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815495 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" exitCode=0 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815508 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" exitCode=143 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815520 4833 generic.go:334] "Generic (PLEG): container finished" podID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" exitCode=143 Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815589 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815647 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815689 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815706 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815717 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815726 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815737 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815746 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815755 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815765 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815775 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815785 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815799 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815815 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815827 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815839 4833 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815848 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815858 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815867 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815877 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815887 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815897 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815906 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815920 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815936 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815947 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815667 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.815959 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816065 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816075 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816082 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 
14:24:08.816089 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816095 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816101 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816108 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jpt5h" event={"ID":"696d56dd-3ce4-489e-a258-677cf1fd8f9b","Type":"ContainerDied","Data":"49293bd2774c9d26b84bb88b4ee3d3c1fe4159153f9ca18252bf677735f098fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816136 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816144 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816151 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816157 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816164 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816170 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816176 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816182 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816188 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.816195 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.831714 4833 scope.go:117] "RemoveContainer" 
containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867386 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f8kjn"] Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867609 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867625 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867632 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867638 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867649 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867655 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867664 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867670 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867679 4833 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="nbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867685 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="nbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867694 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867700 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867708 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-acl-logging" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867714 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-acl-logging" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867723 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="sbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867729 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="sbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867735 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kubecfg-setup" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867741 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kubecfg-setup" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867747 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" 
containerName="northd" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867753 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="northd" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867759 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867764 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867772 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-node" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867778 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-node" Jan 27 14:24:08 crc kubenswrapper[4833]: E0127 14:24:08.867786 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867792 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867881 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-acl-logging" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867890 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867898 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" 
containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867906 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867918 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="nbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867924 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867931 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="northd" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867939 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovn-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867947 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867954 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="sbdb" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.867963 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="kube-rbac-proxy-node" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.868146 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" containerName="ovnkube-controller" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.869768 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.886609 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.907819 4833 scope.go:117] "RemoveContainer" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.923509 4833 scope.go:117] "RemoveContainer" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.935570 4833 scope.go:117] "RemoveContainer" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.946605 4833 scope.go:117] "RemoveContainer" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.957093 4833 scope.go:117] "RemoveContainer" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.968864 4833 scope.go:117] "RemoveContainer" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.980455 4833 scope.go:117] "RemoveContainer" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993523 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993578 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993616 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993652 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fpdn\" (UniqueName: \"kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993682 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: 
\"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993697 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash" (OuterVolumeSpecName: "host-slash") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993844 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 
14:24:08.993825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993967 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994080 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994102 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch\") pod \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\" (UID: \"696d56dd-3ce4-489e-a258-677cf1fd8f9b\") " Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-script-lib\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994385 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-log-socket\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994416 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovn-node-metrics-cert\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-etc-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-systemd-units\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993872 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993899 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.993960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994260 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994348 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994515 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log" (OuterVolumeSpecName: "node-log") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994675 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-netns\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994801 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-systemd\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5s2\" (UniqueName: \"kubernetes.io/projected/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-kube-api-access-jx5s2\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994894 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-ovn\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-bin\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.994942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-node-log\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995039 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-netd\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995078 4833 scope.go:117] "RemoveContainer" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995134 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket" (OuterVolumeSpecName: "log-socket") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995084 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995212 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995234 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-config\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-var-lib-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995334 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-kubelet\") pod \"ovnkube-node-f8kjn\" 
(UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-env-overrides\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-slash\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995521 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995536 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995547 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995559 4833 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 
14:24:08.995570 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995581 4833 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995592 4833 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995604 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995614 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995624 4833 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995637 4833 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995649 4833 reconciler_common.go:293] "Volume detached for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995659 4833 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995670 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995682 4833 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995691 4833 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.995701 4833 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:08 crc kubenswrapper[4833]: I0127 14:24:08.999726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn" (OuterVolumeSpecName: "kube-api-access-6fpdn") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "kube-api-access-6fpdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.000179 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.007254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "696d56dd-3ce4-489e-a258-677cf1fd8f9b" (UID: "696d56dd-3ce4-489e-a258-677cf1fd8f9b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.007821 4833 scope.go:117] "RemoveContainer" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.008251 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": container with ID starting with a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78 not found: ID does not exist" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.008281 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} err="failed to get container status \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": rpc error: code = NotFound desc = could not find container 
\"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": container with ID starting with a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.008301 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.008576 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": container with ID starting with 9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f not found: ID does not exist" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.008619 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} err="failed to get container status \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": rpc error: code = NotFound desc = could not find container \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": container with ID starting with 9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.008650 4833 scope.go:117] "RemoveContainer" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.008957 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": container with ID starting with d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1 not found: ID does not exist" 
containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.008989 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} err="failed to get container status \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": rpc error: code = NotFound desc = could not find container \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": container with ID starting with d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.009007 4833 scope.go:117] "RemoveContainer" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.009346 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": container with ID starting with 57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242 not found: ID does not exist" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.009382 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} err="failed to get container status \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": rpc error: code = NotFound desc = could not find container \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": container with ID starting with 57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.009412 4833 scope.go:117] 
"RemoveContainer" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.009726 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": container with ID starting with 21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe not found: ID does not exist" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.009754 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} err="failed to get container status \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": rpc error: code = NotFound desc = could not find container \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": container with ID starting with 21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.009768 4833 scope.go:117] "RemoveContainer" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.010103 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": container with ID starting with 879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e not found: ID does not exist" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.010145 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} err="failed to get container status \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": rpc error: code = NotFound desc = could not find container \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": container with ID starting with 879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.010204 4833 scope.go:117] "RemoveContainer" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.010744 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": container with ID starting with 56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1 not found: ID does not exist" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.010780 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} err="failed to get container status \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": rpc error: code = NotFound desc = could not find container \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": container with ID starting with 56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.010813 4833 scope.go:117] "RemoveContainer" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.011152 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": container with ID starting with db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685 not found: ID does not exist" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011179 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} err="failed to get container status \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": rpc error: code = NotFound desc = could not find container \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": container with ID starting with db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011197 4833 scope.go:117] "RemoveContainer" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.011508 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": container with ID starting with f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf not found: ID does not exist" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011542 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} err="failed to get container status \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": rpc error: code = NotFound desc = could not find container 
\"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": container with ID starting with f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011562 4833 scope.go:117] "RemoveContainer" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:09 crc kubenswrapper[4833]: E0127 14:24:09.011821 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": container with ID starting with 0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041 not found: ID does not exist" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011846 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} err="failed to get container status \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": rpc error: code = NotFound desc = could not find container \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": container with ID starting with 0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.011860 4833 scope.go:117] "RemoveContainer" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012091 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} err="failed to get container status \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": rpc error: code = NotFound desc = could not find 
container \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": container with ID starting with a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012109 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012303 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} err="failed to get container status \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": rpc error: code = NotFound desc = could not find container \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": container with ID starting with 9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012337 4833 scope.go:117] "RemoveContainer" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012599 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} err="failed to get container status \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": rpc error: code = NotFound desc = could not find container \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": container with ID starting with d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012633 4833 scope.go:117] "RemoveContainer" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012855 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} err="failed to get container status \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": rpc error: code = NotFound desc = could not find container \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": container with ID starting with 57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.012877 4833 scope.go:117] "RemoveContainer" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013203 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} err="failed to get container status \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": rpc error: code = NotFound desc = could not find container \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": container with ID starting with 21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013235 4833 scope.go:117] "RemoveContainer" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013699 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} err="failed to get container status \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": rpc error: code = NotFound desc = could not find container \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": container with ID starting with 
879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013727 4833 scope.go:117] "RemoveContainer" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013969 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} err="failed to get container status \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": rpc error: code = NotFound desc = could not find container \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": container with ID starting with 56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.013991 4833 scope.go:117] "RemoveContainer" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014281 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} err="failed to get container status \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": rpc error: code = NotFound desc = could not find container \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": container with ID starting with db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014300 4833 scope.go:117] "RemoveContainer" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014545 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} err="failed to get container status \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": rpc error: code = NotFound desc = could not find container \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": container with ID starting with f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014572 4833 scope.go:117] "RemoveContainer" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014777 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} err="failed to get container status \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": rpc error: code = NotFound desc = could not find container \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": container with ID starting with 0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.014806 4833 scope.go:117] "RemoveContainer" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015067 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} err="failed to get container status \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": rpc error: code = NotFound desc = could not find container \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": container with ID starting with a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78 not found: ID does not 
exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015092 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015287 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} err="failed to get container status \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": rpc error: code = NotFound desc = could not find container \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": container with ID starting with 9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015304 4833 scope.go:117] "RemoveContainer" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015615 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} err="failed to get container status \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": rpc error: code = NotFound desc = could not find container \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": container with ID starting with d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015641 4833 scope.go:117] "RemoveContainer" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015879 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} err="failed to get container status 
\"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": rpc error: code = NotFound desc = could not find container \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": container with ID starting with 57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.015912 4833 scope.go:117] "RemoveContainer" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016160 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} err="failed to get container status \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": rpc error: code = NotFound desc = could not find container \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": container with ID starting with 21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016178 4833 scope.go:117] "RemoveContainer" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016409 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} err="failed to get container status \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": rpc error: code = NotFound desc = could not find container \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": container with ID starting with 879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016426 4833 scope.go:117] "RemoveContainer" 
containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016669 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} err="failed to get container status \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": rpc error: code = NotFound desc = could not find container \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": container with ID starting with 56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016695 4833 scope.go:117] "RemoveContainer" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.016996 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} err="failed to get container status \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": rpc error: code = NotFound desc = could not find container \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": container with ID starting with db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017019 4833 scope.go:117] "RemoveContainer" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017306 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} err="failed to get container status \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": rpc error: code = NotFound desc = could 
not find container \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": container with ID starting with f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017332 4833 scope.go:117] "RemoveContainer" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017560 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} err="failed to get container status \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": rpc error: code = NotFound desc = could not find container \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": container with ID starting with 0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017586 4833 scope.go:117] "RemoveContainer" containerID="a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017852 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78"} err="failed to get container status \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": rpc error: code = NotFound desc = could not find container \"a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78\": container with ID starting with a1099783be5c2f5afd82a8d25cc95fe660463ec8c5db7d214aba44f6a2e88b78 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.017876 4833 scope.go:117] "RemoveContainer" containerID="9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 
14:24:09.018175 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f"} err="failed to get container status \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": rpc error: code = NotFound desc = could not find container \"9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f\": container with ID starting with 9617b7053a4e9af0c05c4471663a4d3c91b382c3dc72f5d4ff9c531510ec530f not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.018198 4833 scope.go:117] "RemoveContainer" containerID="d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.018439 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1"} err="failed to get container status \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": rpc error: code = NotFound desc = could not find container \"d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1\": container with ID starting with d507a48de2df8fe159c4c3ec60a1859a2e9e42db7f5bc8d77e2c457e97150df1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.018477 4833 scope.go:117] "RemoveContainer" containerID="57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.018709 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242"} err="failed to get container status \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": rpc error: code = NotFound desc = could not find container \"57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242\": container with ID starting with 
57835ad8ad278647be41c15d851815989f935cdd119023df13cc2e4a857b6242 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.018743 4833 scope.go:117] "RemoveContainer" containerID="21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019110 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe"} err="failed to get container status \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": rpc error: code = NotFound desc = could not find container \"21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe\": container with ID starting with 21d80c6878fa7ed7757b8754eb17f04a35b4bf3bfd26659e4b59c482d95422fe not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019134 4833 scope.go:117] "RemoveContainer" containerID="879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019389 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e"} err="failed to get container status \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": rpc error: code = NotFound desc = could not find container \"879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e\": container with ID starting with 879f5b4fa2e5f51ad43fb5f145128ba654188d2045c7fa27126e91eaecb8712e not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019411 4833 scope.go:117] "RemoveContainer" containerID="56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019822 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1"} err="failed to get container status \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": rpc error: code = NotFound desc = could not find container \"56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1\": container with ID starting with 56d1ed3b1b48589be5325b5f0cbc98d3fe9de04dac42376c996c44cd9f2aafa1 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.019845 4833 scope.go:117] "RemoveContainer" containerID="db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.020074 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685"} err="failed to get container status \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": rpc error: code = NotFound desc = could not find container \"db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685\": container with ID starting with db65e6d3e8a10349a541f555d5419a056663ff0d56c3b6fcebbff9f5781ce685 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.020093 4833 scope.go:117] "RemoveContainer" containerID="f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.020357 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf"} err="failed to get container status \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": rpc error: code = NotFound desc = could not find container \"f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf\": container with ID starting with f882d6c883fffd63cd959445e9288a9f8310176066e4fe09f8da216962d9dabf not found: ID does not 
exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.020381 4833 scope.go:117] "RemoveContainer" containerID="0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.020713 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041"} err="failed to get container status \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": rpc error: code = NotFound desc = could not find container \"0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041\": container with ID starting with 0a5462fc04a8755f3b55c2bd9b39d9b45230b8a2d7246bc5f2896dd777389041 not found: ID does not exist" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-slash\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-script-lib\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098151 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 
crc kubenswrapper[4833]: I0127 14:24:09.098185 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-log-socket\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098208 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovn-node-metrics-cert\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-etc-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-systemd-units\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-netns\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098286 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-systemd\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5s2\" (UniqueName: \"kubernetes.io/projected/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-kube-api-access-jx5s2\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098322 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-ovn\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-bin\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-node-log\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-netd\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-config\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-var-lib-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-kubelet\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098491 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-env-overrides\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098524 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fpdn\" (UniqueName: \"kubernetes.io/projected/696d56dd-3ce4-489e-a258-677cf1fd8f9b-kube-api-access-6fpdn\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098534 4833 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/696d56dd-3ce4-489e-a258-677cf1fd8f9b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.098546 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/696d56dd-3ce4-489e-a258-677cf1fd8f9b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-env-overrides\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-slash\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-script-lib\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099562 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099585 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-bin\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099605 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-node-log\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099627 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-cni-netd\") 
pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099647 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-ovn-kubernetes\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-var-lib-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-kubelet\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-systemd-units\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-ovn\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-systemd\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-host-run-netns\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-etc-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-log-socket\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.099852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-run-openvswitch\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.100179 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovnkube-config\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.102261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-ovn-node-metrics-cert\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.117861 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5s2\" (UniqueName: \"kubernetes.io/projected/87cc84b2-aa57-43c1-8b2e-0e5c1636bce4-kube-api-access-jx5s2\") pod \"ovnkube-node-f8kjn\" (UID: \"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4\") " pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.144640 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jpt5h"] Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.148890 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jpt5h"] Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.194103 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.219889 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696d56dd-3ce4-489e-a258-677cf1fd8f9b" path="/var/lib/kubelet/pods/696d56dd-3ce4-489e-a258-677cf1fd8f9b/volumes" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.429992 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gwjhk" Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.825652 4833 generic.go:334] "Generic (PLEG): container finished" podID="87cc84b2-aa57-43c1-8b2e-0e5c1636bce4" containerID="01fea6660cf99d9aa657d9beff30e21e33348b1e2315f0767578cf1098d3846d" exitCode=0 Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.825912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerDied","Data":"01fea6660cf99d9aa657d9beff30e21e33348b1e2315f0767578cf1098d3846d"} Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.825970 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"7c6186be399f723881f8114035abeae8deaf2750e55f6750b822441378f14d84"} Jan 27 14:24:09 crc kubenswrapper[4833]: I0127 14:24:09.829341 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/2.log" Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 14:24:10.838248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"e8481cd1ae4bd958cb2ea0a1ef0894baa97a516e96c68320b99a544e4a12e004"} Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 
14:24:10.838956 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"d477d0cc5919f8d45287486830dcece6e90673312dfe291a766a00d707db1524"} Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 14:24:10.838967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"6f72d7c29d1a923fbb40228791ede579117669778326028c97885c598d95ba25"} Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 14:24:10.838975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"bcb8f0109430bff75333c300b83ac1edd0cca83f4bd9b751b6a5a572bf1e7f49"} Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 14:24:10.838984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"7489ba90bebb5ccdb5e2ec93ca4cb0a3195903f281985c9e096689f681799ade"} Jan 27 14:24:10 crc kubenswrapper[4833]: I0127 14:24:10.838993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"f36f258c9660ff55f06d0843908562539aa5532829981eaf20b71dddebe0eba5"} Jan 27 14:24:14 crc kubenswrapper[4833]: I0127 14:24:14.334132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"9a007333cacd1f1cba15a3b61bcb2bc16b674cb1a3e67c7c2fc4fd668cbb29bd"} Jan 27 14:24:15 crc kubenswrapper[4833]: I0127 14:24:15.343233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" event={"ID":"87cc84b2-aa57-43c1-8b2e-0e5c1636bce4","Type":"ContainerStarted","Data":"13764f1c3380a9e1e5f46866702bd43e79e2b80e0914871f937614629e32abca"} Jan 27 14:24:15 crc kubenswrapper[4833]: I0127 14:24:15.343786 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:15 crc kubenswrapper[4833]: I0127 14:24:15.378114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:15 crc kubenswrapper[4833]: I0127 14:24:15.388407 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" podStartSLOduration=7.388391521 podStartE2EDuration="7.388391521s" podCreationTimestamp="2026-01-27 14:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:24:15.385516091 +0000 UTC m=+757.036840503" watchObservedRunningTime="2026-01-27 14:24:15.388391521 +0000 UTC m=+757.039715923" Jan 27 14:24:16 crc kubenswrapper[4833]: I0127 14:24:16.348924 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:16 crc kubenswrapper[4833]: I0127 14:24:16.349254 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:16 crc kubenswrapper[4833]: I0127 14:24:16.376818 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:20 crc kubenswrapper[4833]: I0127 14:24:20.210845 4833 scope.go:117] "RemoveContainer" containerID="3ff19209ce0ef90cfad465697fd3b41d240f32ef7b2d01dd3d720eaed3f27367" Jan 27 14:24:21 crc kubenswrapper[4833]: I0127 14:24:21.383190 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-npb46_b7a7c135-ca95-4e75-b823-d1e45101a761/kube-multus/2.log" Jan 27 14:24:21 crc kubenswrapper[4833]: I0127 14:24:21.385099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-npb46" event={"ID":"b7a7c135-ca95-4e75-b823-d1e45101a761","Type":"ContainerStarted","Data":"0dc6e7f44d6af17e65b15af501accc96cf399a2d7cda616fb87293106c10e8cf"} Jan 27 14:24:31 crc kubenswrapper[4833]: I0127 14:24:31.832108 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:24:32 crc kubenswrapper[4833]: I0127 14:24:32.260736 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:24:32 crc kubenswrapper[4833]: I0127 14:24:32.260820 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.593773 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v"] Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.596188 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.597992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.619747 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v"] Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.710559 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.710949 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkfr\" (UniqueName: \"kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.710986 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: 
I0127 14:24:38.812210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.812305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkfr\" (UniqueName: \"kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.812354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.812848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.813048 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.844264 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkfr\" (UniqueName: \"kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:38 crc kubenswrapper[4833]: I0127 14:24:38.918387 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:39 crc kubenswrapper[4833]: I0127 14:24:39.173266 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v"] Jan 27 14:24:39 crc kubenswrapper[4833]: I0127 14:24:39.225794 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f8kjn" Jan 27 14:24:39 crc kubenswrapper[4833]: I0127 14:24:39.505285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerStarted","Data":"70effdcb6916e4a6aac57251076adc04fdeff2a5544a99ba2e5f4cf1b5341c90"} Jan 27 14:24:39 crc kubenswrapper[4833]: I0127 14:24:39.505353 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" 
event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerStarted","Data":"18872d06e838277940caac246dae789387b4ee594e4acf722fa441387af3a1f1"} Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.514828 4833 generic.go:334] "Generic (PLEG): container finished" podID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerID="70effdcb6916e4a6aac57251076adc04fdeff2a5544a99ba2e5f4cf1b5341c90" exitCode=0 Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.514887 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerDied","Data":"70effdcb6916e4a6aac57251076adc04fdeff2a5544a99ba2e5f4cf1b5341c90"} Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.914253 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.917235 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.928935 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.947380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.947573 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckl89\" (UniqueName: \"kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:40 crc kubenswrapper[4833]: I0127 14:24:40.947651 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.049316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckl89\" (UniqueName: \"kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.049618 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.049744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.050226 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.050303 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.069317 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckl89\" (UniqueName: \"kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89\") pod \"redhat-operators-rvxc7\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.243172 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.432148 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:24:41 crc kubenswrapper[4833]: I0127 14:24:41.524180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerStarted","Data":"29033dd71349145ddc2b1530dc78ffb9c0077700fdde9d6ce5c594f7af3ef756"} Jan 27 14:24:42 crc kubenswrapper[4833]: I0127 14:24:42.537292 4833 generic.go:334] "Generic (PLEG): container finished" podID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerID="0c27b72ad56c524492e2683aed49c706d0c5fa23bf676831077511fe0dbe46a8" exitCode=0 Jan 27 14:24:42 crc kubenswrapper[4833]: I0127 14:24:42.537359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerDied","Data":"0c27b72ad56c524492e2683aed49c706d0c5fa23bf676831077511fe0dbe46a8"} Jan 27 14:24:42 crc kubenswrapper[4833]: I0127 14:24:42.538958 4833 generic.go:334] "Generic (PLEG): container finished" podID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerID="8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995" exitCode=0 Jan 27 14:24:42 crc kubenswrapper[4833]: I0127 14:24:42.538979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerDied","Data":"8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995"} Jan 27 14:24:43 crc kubenswrapper[4833]: I0127 14:24:43.547093 4833 generic.go:334] "Generic (PLEG): container finished" podID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" 
containerID="89522f0a0d37929cddb323328992347815f6d0aa0eb48c7270b3b46b9082dfe7" exitCode=0 Jan 27 14:24:43 crc kubenswrapper[4833]: I0127 14:24:43.547180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerDied","Data":"89522f0a0d37929cddb323328992347815f6d0aa0eb48c7270b3b46b9082dfe7"} Jan 27 14:24:43 crc kubenswrapper[4833]: I0127 14:24:43.550303 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerStarted","Data":"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b"} Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.562270 4833 generic.go:334] "Generic (PLEG): container finished" podID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerID="1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b" exitCode=0 Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.562396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerDied","Data":"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b"} Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.838329 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.904656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdkfr\" (UniqueName: \"kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr\") pod \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.904837 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle\") pod \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.904871 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util\") pod \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\" (UID: \"019c9be7-9fb3-48c8-97e6-fe7463d16b34\") " Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.909806 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle" (OuterVolumeSpecName: "bundle") pod "019c9be7-9fb3-48c8-97e6-fe7463d16b34" (UID: "019c9be7-9fb3-48c8-97e6-fe7463d16b34"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:24:44 crc kubenswrapper[4833]: I0127 14:24:44.911634 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr" (OuterVolumeSpecName: "kube-api-access-kdkfr") pod "019c9be7-9fb3-48c8-97e6-fe7463d16b34" (UID: "019c9be7-9fb3-48c8-97e6-fe7463d16b34"). InnerVolumeSpecName "kube-api-access-kdkfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.006430 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkfr\" (UniqueName: \"kubernetes.io/projected/019c9be7-9fb3-48c8-97e6-fe7463d16b34-kube-api-access-kdkfr\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.006866 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.204628 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util" (OuterVolumeSpecName: "util") pod "019c9be7-9fb3-48c8-97e6-fe7463d16b34" (UID: "019c9be7-9fb3-48c8-97e6-fe7463d16b34"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.214043 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/019c9be7-9fb3-48c8-97e6-fe7463d16b34-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.572804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerStarted","Data":"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24"} Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.575323 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" event={"ID":"019c9be7-9fb3-48c8-97e6-fe7463d16b34","Type":"ContainerDied","Data":"18872d06e838277940caac246dae789387b4ee594e4acf722fa441387af3a1f1"} Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.575366 4833 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18872d06e838277940caac246dae789387b4ee594e4acf722fa441387af3a1f1" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.575425 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v" Jan 27 14:24:45 crc kubenswrapper[4833]: I0127 14:24:45.594922 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvxc7" podStartSLOduration=2.854638538 podStartE2EDuration="5.594905588s" podCreationTimestamp="2026-01-27 14:24:40 +0000 UTC" firstStartedPulling="2026-01-27 14:24:42.540236478 +0000 UTC m=+784.191560880" lastFinishedPulling="2026-01-27 14:24:45.280503528 +0000 UTC m=+786.931827930" observedRunningTime="2026-01-27 14:24:45.59375913 +0000 UTC m=+787.245083542" watchObservedRunningTime="2026-01-27 14:24:45.594905588 +0000 UTC m=+787.246230000" Jan 27 14:24:51 crc kubenswrapper[4833]: I0127 14:24:51.243587 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:51 crc kubenswrapper[4833]: I0127 14:24:51.244043 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:24:52 crc kubenswrapper[4833]: I0127 14:24:52.295998 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvxc7" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="registry-server" probeResult="failure" output=< Jan 27 14:24:52 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:24:52 crc kubenswrapper[4833]: > Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.194955 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj"] Jan 27 
14:24:56 crc kubenswrapper[4833]: E0127 14:24:56.196151 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="extract" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.196222 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="extract" Jan 27 14:24:56 crc kubenswrapper[4833]: E0127 14:24:56.196274 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="util" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.196319 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="util" Jan 27 14:24:56 crc kubenswrapper[4833]: E0127 14:24:56.196372 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="pull" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.196416 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="pull" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.196570 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="019c9be7-9fb3-48c8-97e6-fe7463d16b34" containerName="extract" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.196972 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.198673 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dd6hj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.199071 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.199217 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.214184 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.246871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgcn\" (UniqueName: \"kubernetes.io/projected/4c00dc67-8748-49eb-ae3c-8ffeb7bbab98-kube-api-access-5xgcn\") pod \"obo-prometheus-operator-68bc856cb9-bhffj\" (UID: \"4c00dc67-8748-49eb-ae3c-8ffeb7bbab98\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.319078 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.320111 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.321737 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-cbmc7" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.322280 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.331040 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.331671 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.340427 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.348541 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgcn\" (UniqueName: \"kubernetes.io/projected/4c00dc67-8748-49eb-ae3c-8ffeb7bbab98-kube-api-access-5xgcn\") pod \"obo-prometheus-operator-68bc856cb9-bhffj\" (UID: \"4c00dc67-8748-49eb-ae3c-8ffeb7bbab98\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.364504 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.371693 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgcn\" (UniqueName: 
\"kubernetes.io/projected/4c00dc67-8748-49eb-ae3c-8ffeb7bbab98-kube-api-access-5xgcn\") pod \"obo-prometheus-operator-68bc856cb9-bhffj\" (UID: \"4c00dc67-8748-49eb-ae3c-8ffeb7bbab98\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.449883 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.449958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.450058 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.450101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.516571 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.550879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.550936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.550974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.550994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.560501 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.560979 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.567947 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29b2f4e4-a61f-4173-9297-ef6d1d46330a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv\" (UID: \"29b2f4e4-a61f-4173-9297-ef6d1d46330a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.570061 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/86100396-5b44-496c-b05b-b39fbe052fa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6f86595b7-7jp89\" (UID: \"86100396-5b44-496c-b05b-b39fbe052fa8\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.590942 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wp7pq"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.591826 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.599698 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-brxkw" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.599930 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.614643 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wp7pq"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.637777 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.647710 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.651616 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.651678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s98jb\" (UniqueName: \"kubernetes.io/projected/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-kube-api-access-s98jb\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.736404 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7hv4r"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.737713 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.741305 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4dv9z" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.748546 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7hv4r"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.752797 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.752875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s98jb\" (UniqueName: \"kubernetes.io/projected/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-kube-api-access-s98jb\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.769204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.779354 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s98jb\" (UniqueName: 
\"kubernetes.io/projected/c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3-kube-api-access-s98jb\") pod \"observability-operator-59bdc8b94-wp7pq\" (UID: \"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3\") " pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.855771 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktfw\" (UniqueName: \"kubernetes.io/projected/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-kube-api-access-8ktfw\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.855861 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: W0127 14:24:56.868059 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c00dc67_8748_49eb_ae3c_8ffeb7bbab98.slice/crio-7e6641af6a3372efe58bb7325a3e3e2dff96062d12b8120d9ebf9609a6d2e9d5 WatchSource:0}: Error finding container 7e6641af6a3372efe58bb7325a3e3e2dff96062d12b8120d9ebf9609a6d2e9d5: Status 404 returned error can't find the container with id 7e6641af6a3372efe58bb7325a3e3e2dff96062d12b8120d9ebf9609a6d2e9d5 Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.870281 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.923044 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.957259 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktfw\" (UniqueName: \"kubernetes.io/projected/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-kube-api-access-8ktfw\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.957336 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.958168 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.959424 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv"] Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.966084 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89"] Jan 27 14:24:56 crc kubenswrapper[4833]: W0127 14:24:56.972242 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29b2f4e4_a61f_4173_9297_ef6d1d46330a.slice/crio-0efb7ba54f3e5040e6c42f20c8803114248845481a34e3bf08a29650b84a1acc WatchSource:0}: Error finding container 0efb7ba54f3e5040e6c42f20c8803114248845481a34e3bf08a29650b84a1acc: Status 404 returned error can't find the container with id 0efb7ba54f3e5040e6c42f20c8803114248845481a34e3bf08a29650b84a1acc Jan 27 14:24:56 crc kubenswrapper[4833]: I0127 14:24:56.976224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktfw\" (UniqueName: \"kubernetes.io/projected/2302ff6f-4d3a-4517-a8e7-4de46e9456c1-kube-api-access-8ktfw\") pod \"perses-operator-5bf474d74f-7hv4r\" (UID: \"2302ff6f-4d3a-4517-a8e7-4de46e9456c1\") " pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.060747 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.314641 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7hv4r"] Jan 27 14:24:57 crc kubenswrapper[4833]: W0127 14:24:57.317249 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2302ff6f_4d3a_4517_a8e7_4de46e9456c1.slice/crio-9b469ce9ada277704d3a68efa0fad34b65793d915c40cebd9b0ded9f9c653764 WatchSource:0}: Error finding container 9b469ce9ada277704d3a68efa0fad34b65793d915c40cebd9b0ded9f9c653764: Status 404 returned error can't find the container with id 9b469ce9ada277704d3a68efa0fad34b65793d915c40cebd9b0ded9f9c653764 Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.385936 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wp7pq"] Jan 27 14:24:57 crc kubenswrapper[4833]: W0127 14:24:57.395532 4833 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc15ae3a9_dcc0_4ce5_b3ff_9e4bd72b09a3.slice/crio-267b7a754c1debc6063d001e2470f3ec65eb082244e80af67316c937cb5cc591 WatchSource:0}: Error finding container 267b7a754c1debc6063d001e2470f3ec65eb082244e80af67316c937cb5cc591: Status 404 returned error can't find the container with id 267b7a754c1debc6063d001e2470f3ec65eb082244e80af67316c937cb5cc591 Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.650728 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" event={"ID":"29b2f4e4-a61f-4173-9297-ef6d1d46330a","Type":"ContainerStarted","Data":"0efb7ba54f3e5040e6c42f20c8803114248845481a34e3bf08a29650b84a1acc"} Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.651577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" event={"ID":"4c00dc67-8748-49eb-ae3c-8ffeb7bbab98","Type":"ContainerStarted","Data":"7e6641af6a3372efe58bb7325a3e3e2dff96062d12b8120d9ebf9609a6d2e9d5"} Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.652784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" event={"ID":"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3","Type":"ContainerStarted","Data":"267b7a754c1debc6063d001e2470f3ec65eb082244e80af67316c937cb5cc591"} Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.653738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" event={"ID":"86100396-5b44-496c-b05b-b39fbe052fa8","Type":"ContainerStarted","Data":"32886918d7cb64f7f0564d87d8dd47e6217b587938d350f8fffe177889b7ca38"} Jan 27 14:24:57 crc kubenswrapper[4833]: I0127 14:24:57.654663 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" event={"ID":"2302ff6f-4d3a-4517-a8e7-4de46e9456c1","Type":"ContainerStarted","Data":"9b469ce9ada277704d3a68efa0fad34b65793d915c40cebd9b0ded9f9c653764"} Jan 27 14:25:01 crc kubenswrapper[4833]: I0127 14:25:01.287977 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:25:01 crc kubenswrapper[4833]: I0127 14:25:01.341039 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:25:01 crc kubenswrapper[4833]: I0127 14:25:01.515251 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.261963 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.262018 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.262068 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.262612 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd"} 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.262664 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd" gracePeriod=600 Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.686811 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd" exitCode=0 Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.686921 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd"} Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.686959 4833 scope.go:117] "RemoveContainer" containerID="46304200ef48984ab84f4b4a5c14acbe072f1585d66ca4c1d6298873b2c4d044" Jan 27 14:25:02 crc kubenswrapper[4833]: I0127 14:25:02.686987 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvxc7" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="registry-server" containerID="cri-o://a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24" gracePeriod=2 Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.469358 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.558619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content\") pod \"7caa2894-15d3-4e24-93e2-27fc607b0362\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.558732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckl89\" (UniqueName: \"kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89\") pod \"7caa2894-15d3-4e24-93e2-27fc607b0362\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.558860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities\") pod \"7caa2894-15d3-4e24-93e2-27fc607b0362\" (UID: \"7caa2894-15d3-4e24-93e2-27fc607b0362\") " Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.560970 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities" (OuterVolumeSpecName: "utilities") pod "7caa2894-15d3-4e24-93e2-27fc607b0362" (UID: "7caa2894-15d3-4e24-93e2-27fc607b0362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.569832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89" (OuterVolumeSpecName: "kube-api-access-ckl89") pod "7caa2894-15d3-4e24-93e2-27fc607b0362" (UID: "7caa2894-15d3-4e24-93e2-27fc607b0362"). InnerVolumeSpecName "kube-api-access-ckl89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.659787 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.659838 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckl89\" (UniqueName: \"kubernetes.io/projected/7caa2894-15d3-4e24-93e2-27fc607b0362-kube-api-access-ckl89\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704702 4833 generic.go:334] "Generic (PLEG): container finished" podID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerID="a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24" exitCode=0 Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7caa2894-15d3-4e24-93e2-27fc607b0362" (UID: "7caa2894-15d3-4e24-93e2-27fc607b0362"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704771 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxc7" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerDied","Data":"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxc7" event={"ID":"7caa2894-15d3-4e24-93e2-27fc607b0362","Type":"ContainerDied","Data":"29033dd71349145ddc2b1530dc78ffb9c0077700fdde9d6ce5c594f7af3ef756"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.704898 4833 scope.go:117] "RemoveContainer" containerID="a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.708617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" event={"ID":"4c00dc67-8748-49eb-ae3c-8ffeb7bbab98","Type":"ContainerStarted","Data":"2fc1d4a3e4d18c5060b4d7fceb8915c3485c89974251adb77dc7700024f4cd21"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.711548 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.713864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" event={"ID":"86100396-5b44-496c-b05b-b39fbe052fa8","Type":"ContainerStarted","Data":"d2de19f694fdee5e12578e30b5382468e102f3d34bb7f468a6520be61f280dfb"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.718780 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" event={"ID":"29b2f4e4-a61f-4173-9297-ef6d1d46330a","Type":"ContainerStarted","Data":"62c7cf94be05611de90e4c70ebe03abe13fcb2b52e3e1eeb5115a449d3bca0b0"} Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.729229 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bhffj" podStartSLOduration=1.408540522 podStartE2EDuration="7.729205815s" podCreationTimestamp="2026-01-27 14:24:56 +0000 UTC" firstStartedPulling="2026-01-27 14:24:56.875802652 +0000 UTC m=+798.527127054" lastFinishedPulling="2026-01-27 14:25:03.196467945 +0000 UTC m=+804.847792347" observedRunningTime="2026-01-27 14:25:03.723458593 +0000 UTC m=+805.374782995" watchObservedRunningTime="2026-01-27 14:25:03.729205815 +0000 UTC m=+805.380530217" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.734490 4833 scope.go:117] "RemoveContainer" containerID="1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.744364 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-7jp89" podStartSLOduration=1.5628345559999999 podStartE2EDuration="7.744350031s" podCreationTimestamp="2026-01-27 14:24:56 +0000 UTC" firstStartedPulling="2026-01-27 14:24:56.996828471 +0000 UTC m=+798.648152873" lastFinishedPulling="2026-01-27 14:25:03.178343946 +0000 UTC m=+804.829668348" observedRunningTime="2026-01-27 14:25:03.742318601 +0000 UTC m=+805.393643003" watchObservedRunningTime="2026-01-27 14:25:03.744350031 +0000 UTC m=+805.395674433" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.766241 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 
14:25:03.771604 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7caa2894-15d3-4e24-93e2-27fc607b0362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.774423 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvxc7"] Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.786384 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv" podStartSLOduration=1.570704762 podStartE2EDuration="7.786362832s" podCreationTimestamp="2026-01-27 14:24:56 +0000 UTC" firstStartedPulling="2026-01-27 14:24:56.980486836 +0000 UTC m=+798.631811238" lastFinishedPulling="2026-01-27 14:25:03.196144906 +0000 UTC m=+804.847469308" observedRunningTime="2026-01-27 14:25:03.785973912 +0000 UTC m=+805.437298314" watchObservedRunningTime="2026-01-27 14:25:03.786362832 +0000 UTC m=+805.437687234" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.788288 4833 scope.go:117] "RemoveContainer" containerID="8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.826000 4833 scope.go:117] "RemoveContainer" containerID="a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24" Jan 27 14:25:03 crc kubenswrapper[4833]: E0127 14:25:03.827329 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24\": container with ID starting with a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24 not found: ID does not exist" containerID="a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.827372 4833 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24"} err="failed to get container status \"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24\": rpc error: code = NotFound desc = could not find container \"a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24\": container with ID starting with a276e665bd87510cd867c0b2c69a3a4a2443229418282bcbc219f5a61a76ed24 not found: ID does not exist" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.827401 4833 scope.go:117] "RemoveContainer" containerID="1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b" Jan 27 14:25:03 crc kubenswrapper[4833]: E0127 14:25:03.827795 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b\": container with ID starting with 1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b not found: ID does not exist" containerID="1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.827817 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b"} err="failed to get container status \"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b\": rpc error: code = NotFound desc = could not find container \"1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b\": container with ID starting with 1785be8a46231fbecc80958c7cfec821155b75190fff472728d106791677721b not found: ID does not exist" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.827833 4833 scope.go:117] "RemoveContainer" containerID="8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995" Jan 27 14:25:03 crc kubenswrapper[4833]: E0127 14:25:03.829733 4833 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995\": container with ID starting with 8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995 not found: ID does not exist" containerID="8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995" Jan 27 14:25:03 crc kubenswrapper[4833]: I0127 14:25:03.829760 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995"} err="failed to get container status \"8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995\": rpc error: code = NotFound desc = could not find container \"8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995\": container with ID starting with 8c458ad22de64ddf2a0cb1943bf4d7283bd310af7553a17a429f86621e84a995 not found: ID does not exist" Jan 27 14:25:05 crc kubenswrapper[4833]: I0127 14:25:05.217645 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" path="/var/lib/kubelet/pods/7caa2894-15d3-4e24-93e2-27fc607b0362/volumes" Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.746918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" event={"ID":"2302ff6f-4d3a-4517-a8e7-4de46e9456c1","Type":"ContainerStarted","Data":"66a21e2b5560d4120ab49ef39483d47b94871c2f8d00c7b843307ae83a0396af"} Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.747195 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.748153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" 
event={"ID":"c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3","Type":"ContainerStarted","Data":"9a0e2e4624220203bb2ce3bf6807c5362e6b21f08de26e4d4ffbd3feb6880054"} Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.748544 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.767463 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" podStartSLOduration=1.855347334 podStartE2EDuration="11.76743043s" podCreationTimestamp="2026-01-27 14:24:56 +0000 UTC" firstStartedPulling="2026-01-27 14:24:57.319995159 +0000 UTC m=+798.971319561" lastFinishedPulling="2026-01-27 14:25:07.232078245 +0000 UTC m=+808.883402657" observedRunningTime="2026-01-27 14:25:07.765080432 +0000 UTC m=+809.416404834" watchObservedRunningTime="2026-01-27 14:25:07.76743043 +0000 UTC m=+809.418754832" Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.785666 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" podStartSLOduration=1.9329176160000001 podStartE2EDuration="11.785648081s" podCreationTimestamp="2026-01-27 14:24:56 +0000 UTC" firstStartedPulling="2026-01-27 14:24:57.39793076 +0000 UTC m=+799.049255162" lastFinishedPulling="2026-01-27 14:25:07.250661225 +0000 UTC m=+808.901985627" observedRunningTime="2026-01-27 14:25:07.784052012 +0000 UTC m=+809.435376414" watchObservedRunningTime="2026-01-27 14:25:07.785648081 +0000 UTC m=+809.436972483" Jan 27 14:25:07 crc kubenswrapper[4833]: I0127 14:25:07.810747 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-wp7pq" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.169418 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:15 crc kubenswrapper[4833]: E0127 14:25:15.170201 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="extract-utilities" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.170216 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="extract-utilities" Jan 27 14:25:15 crc kubenswrapper[4833]: E0127 14:25:15.170238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="registry-server" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.170246 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="registry-server" Jan 27 14:25:15 crc kubenswrapper[4833]: E0127 14:25:15.170259 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="extract-content" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.170267 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="extract-content" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.170391 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7caa2894-15d3-4e24-93e2-27fc607b0362" containerName="registry-server" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.171317 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.196956 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.226153 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxms\" (UniqueName: \"kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.226209 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.226343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.327358 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.327474 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-srxms\" (UniqueName: \"kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.327496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.327852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.327907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.351510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxms\" (UniqueName: \"kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms\") pod \"redhat-marketplace-rljsl\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.486482 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.764546 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:15 crc kubenswrapper[4833]: W0127 14:25:15.770682 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc06c70_4eeb_474b_9f99_38af3ecf4ca7.slice/crio-72c9ef7241c361fc85052ebcdefe6fa77ec78044f35a37f7dfd85c08e01801a9 WatchSource:0}: Error finding container 72c9ef7241c361fc85052ebcdefe6fa77ec78044f35a37f7dfd85c08e01801a9: Status 404 returned error can't find the container with id 72c9ef7241c361fc85052ebcdefe6fa77ec78044f35a37f7dfd85c08e01801a9 Jan 27 14:25:15 crc kubenswrapper[4833]: I0127 14:25:15.794592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerStarted","Data":"72c9ef7241c361fc85052ebcdefe6fa77ec78044f35a37f7dfd85c08e01801a9"} Jan 27 14:25:16 crc kubenswrapper[4833]: I0127 14:25:16.802544 4833 generic.go:334] "Generic (PLEG): container finished" podID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerID="27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e" exitCode=0 Jan 27 14:25:16 crc kubenswrapper[4833]: I0127 14:25:16.802629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerDied","Data":"27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e"} Jan 27 14:25:17 crc kubenswrapper[4833]: I0127 14:25:17.064471 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7hv4r" Jan 27 14:25:17 crc kubenswrapper[4833]: I0127 14:25:17.812366 4833 generic.go:334] "Generic (PLEG): container 
finished" podID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerID="0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe" exitCode=0 Jan 27 14:25:17 crc kubenswrapper[4833]: I0127 14:25:17.812426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerDied","Data":"0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe"} Jan 27 14:25:18 crc kubenswrapper[4833]: I0127 14:25:18.819655 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerStarted","Data":"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff"} Jan 27 14:25:18 crc kubenswrapper[4833]: I0127 14:25:18.835959 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rljsl" podStartSLOduration=2.408143261 podStartE2EDuration="3.835938911s" podCreationTimestamp="2026-01-27 14:25:15 +0000 UTC" firstStartedPulling="2026-01-27 14:25:16.804088263 +0000 UTC m=+818.455412665" lastFinishedPulling="2026-01-27 14:25:18.231883913 +0000 UTC m=+819.883208315" observedRunningTime="2026-01-27 14:25:18.832499426 +0000 UTC m=+820.483823848" watchObservedRunningTime="2026-01-27 14:25:18.835938911 +0000 UTC m=+820.487263323" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.736581 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.738716 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.764142 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.868352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcstf\" (UniqueName: \"kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.868411 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.868475 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.970305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcstf\" (UniqueName: \"kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.970685 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.970745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.971126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.971207 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:22 crc kubenswrapper[4833]: I0127 14:25:22.999612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcstf\" (UniqueName: \"kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf\") pod \"certified-operators-6x5p5\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:23 crc kubenswrapper[4833]: I0127 14:25:23.077285 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:23 crc kubenswrapper[4833]: I0127 14:25:23.474849 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:23 crc kubenswrapper[4833]: I0127 14:25:23.864248 4833 generic.go:334] "Generic (PLEG): container finished" podID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerID="ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67" exitCode=0 Jan 27 14:25:23 crc kubenswrapper[4833]: I0127 14:25:23.864301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerDied","Data":"ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67"} Jan 27 14:25:23 crc kubenswrapper[4833]: I0127 14:25:23.864330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerStarted","Data":"a7fdffe3d4e5c873b6c6541fda253505cdbf137881e466826df9546167217759"} Jan 27 14:25:24 crc kubenswrapper[4833]: I0127 14:25:24.872998 4833 generic.go:334] "Generic (PLEG): container finished" podID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerID="995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123" exitCode=0 Jan 27 14:25:24 crc kubenswrapper[4833]: I0127 14:25:24.873251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerDied","Data":"995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123"} Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.486741 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.487100 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.535888 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.884252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerStarted","Data":"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194"} Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.910389 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6x5p5" podStartSLOduration=2.532171279 podStartE2EDuration="3.910370121s" podCreationTimestamp="2026-01-27 14:25:22 +0000 UTC" firstStartedPulling="2026-01-27 14:25:23.866389542 +0000 UTC m=+825.517713964" lastFinishedPulling="2026-01-27 14:25:25.244588404 +0000 UTC m=+826.895912806" observedRunningTime="2026-01-27 14:25:25.903563862 +0000 UTC m=+827.554888294" watchObservedRunningTime="2026-01-27 14:25:25.910370121 +0000 UTC m=+827.561694533" Jan 27 14:25:25 crc kubenswrapper[4833]: I0127 14:25:25.923650 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:27 crc kubenswrapper[4833]: I0127 14:25:27.927874 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:27 crc kubenswrapper[4833]: I0127 14:25:27.928671 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rljsl" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="registry-server" containerID="cri-o://f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff" gracePeriod=2 Jan 
27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.260194 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.359683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srxms\" (UniqueName: \"kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms\") pod \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.359830 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities\") pod \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.359922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content\") pod \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\" (UID: \"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7\") " Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.363349 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities" (OuterVolumeSpecName: "utilities") pod "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" (UID: "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.381299 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms" (OuterVolumeSpecName: "kube-api-access-srxms") pod "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" (UID: "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7"). InnerVolumeSpecName "kube-api-access-srxms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.399320 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" (UID: "bcc06c70-4eeb-474b-9f99-38af3ecf4ca7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.461552 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.461590 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.461603 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srxms\" (UniqueName: \"kubernetes.io/projected/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7-kube-api-access-srxms\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.908301 4833 generic.go:334] "Generic (PLEG): container finished" podID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" 
containerID="f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff" exitCode=0 Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.908376 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerDied","Data":"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff"} Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.908421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rljsl" event={"ID":"bcc06c70-4eeb-474b-9f99-38af3ecf4ca7","Type":"ContainerDied","Data":"72c9ef7241c361fc85052ebcdefe6fa77ec78044f35a37f7dfd85c08e01801a9"} Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.908468 4833 scope.go:117] "RemoveContainer" containerID="f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.908669 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rljsl" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.942259 4833 scope.go:117] "RemoveContainer" containerID="0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe" Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.956015 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:28 crc kubenswrapper[4833]: I0127 14:25:28.977641 4833 scope.go:117] "RemoveContainer" containerID="27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:28.977767 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rljsl"] Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.019992 4833 scope.go:117] "RemoveContainer" containerID="f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff" Jan 27 14:25:29 crc kubenswrapper[4833]: E0127 14:25:29.027682 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff\": container with ID starting with f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff not found: ID does not exist" containerID="f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.027734 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff"} err="failed to get container status \"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff\": rpc error: code = NotFound desc = could not find container \"f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff\": container with ID starting with f260fc30c4ffdb3eea2c36f3b359511bbca80572ec4a3334552ade9a226cdbff not found: 
ID does not exist" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.027759 4833 scope.go:117] "RemoveContainer" containerID="0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe" Jan 27 14:25:29 crc kubenswrapper[4833]: E0127 14:25:29.038596 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe\": container with ID starting with 0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe not found: ID does not exist" containerID="0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.038646 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe"} err="failed to get container status \"0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe\": rpc error: code = NotFound desc = could not find container \"0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe\": container with ID starting with 0370c72a47ba3ac6a4b052517add3b6c7de57ed0ce9e46912090f8d70586d2fe not found: ID does not exist" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.038672 4833 scope.go:117] "RemoveContainer" containerID="27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e" Jan 27 14:25:29 crc kubenswrapper[4833]: E0127 14:25:29.053323 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e\": container with ID starting with 27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e not found: ID does not exist" containerID="27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.053367 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e"} err="failed to get container status \"27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e\": rpc error: code = NotFound desc = could not find container \"27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e\": container with ID starting with 27f718205844458d537db3e87b3eb79420846363eebf32ab812b55416eac829e not found: ID does not exist" Jan 27 14:25:29 crc kubenswrapper[4833]: I0127 14:25:29.217665 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" path="/var/lib/kubelet/pods/bcc06c70-4eeb-474b-9f99-38af3ecf4ca7/volumes" Jan 27 14:25:33 crc kubenswrapper[4833]: I0127 14:25:33.078176 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:33 crc kubenswrapper[4833]: I0127 14:25:33.078734 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:33 crc kubenswrapper[4833]: I0127 14:25:33.115032 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:33 crc kubenswrapper[4833]: I0127 14:25:33.995422 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:34 crc kubenswrapper[4833]: I0127 14:25:34.038027 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.195500 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7"] Jan 27 14:25:35 crc kubenswrapper[4833]: E0127 14:25:35.195692 4833 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="extract-content" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.195702 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="extract-content" Jan 27 14:25:35 crc kubenswrapper[4833]: E0127 14:25:35.195718 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="registry-server" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.195723 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="registry-server" Jan 27 14:25:35 crc kubenswrapper[4833]: E0127 14:25:35.195735 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="extract-utilities" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.195741 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="extract-utilities" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.195833 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc06c70-4eeb-474b-9f99-38af3ecf4ca7" containerName="registry-server" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.196712 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.198842 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.219786 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7"] Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.360141 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.360199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxmx\" (UniqueName: \"kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.360243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: 
I0127 14:25:35.461259 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.461317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxmx\" (UniqueName: \"kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.461359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.462135 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.462407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.502108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxmx\" (UniqueName: \"kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.525260 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.930107 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7"] Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.956271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" event={"ID":"dfd1a4fc-5465-4778-9c41-4be0cf541237","Type":"ContainerStarted","Data":"ec7161ef4a5f1b709eb594c537c9695d0e7083197499d7bef39ecd9c51f73116"} Jan 27 14:25:35 crc kubenswrapper[4833]: I0127 14:25:35.956437 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6x5p5" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="registry-server" containerID="cri-o://721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194" gracePeriod=2 Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.289202 
4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.371864 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content\") pod \"6029a94a-e5fd-4a74-85c2-0d369c533708\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.371995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcstf\" (UniqueName: \"kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf\") pod \"6029a94a-e5fd-4a74-85c2-0d369c533708\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.372102 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities\") pod \"6029a94a-e5fd-4a74-85c2-0d369c533708\" (UID: \"6029a94a-e5fd-4a74-85c2-0d369c533708\") " Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.373416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities" (OuterVolumeSpecName: "utilities") pod "6029a94a-e5fd-4a74-85c2-0d369c533708" (UID: "6029a94a-e5fd-4a74-85c2-0d369c533708"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.375796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf" (OuterVolumeSpecName: "kube-api-access-bcstf") pod "6029a94a-e5fd-4a74-85c2-0d369c533708" (UID: "6029a94a-e5fd-4a74-85c2-0d369c533708"). 
InnerVolumeSpecName "kube-api-access-bcstf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.415283 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6029a94a-e5fd-4a74-85c2-0d369c533708" (UID: "6029a94a-e5fd-4a74-85c2-0d369c533708"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.473225 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.473274 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6029a94a-e5fd-4a74-85c2-0d369c533708-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.473287 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcstf\" (UniqueName: \"kubernetes.io/projected/6029a94a-e5fd-4a74-85c2-0d369c533708-kube-api-access-bcstf\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.965381 4833 generic.go:334] "Generic (PLEG): container finished" podID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerID="721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194" exitCode=0 Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.965505 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x5p5" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.965548 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerDied","Data":"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194"} Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.965591 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x5p5" event={"ID":"6029a94a-e5fd-4a74-85c2-0d369c533708","Type":"ContainerDied","Data":"a7fdffe3d4e5c873b6c6541fda253505cdbf137881e466826df9546167217759"} Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.965619 4833 scope.go:117] "RemoveContainer" containerID="721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194" Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.967287 4833 generic.go:334] "Generic (PLEG): container finished" podID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerID="126ba65284fb09816cfc236de8cdcf5c760369ea3e7af092b7a0da8f4ad318fb" exitCode=0 Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.967331 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" event={"ID":"dfd1a4fc-5465-4778-9c41-4be0cf541237","Type":"ContainerDied","Data":"126ba65284fb09816cfc236de8cdcf5c760369ea3e7af092b7a0da8f4ad318fb"} Jan 27 14:25:36 crc kubenswrapper[4833]: I0127 14:25:36.994532 4833 scope.go:117] "RemoveContainer" containerID="995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.004690 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.011064 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-6x5p5"] Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.020814 4833 scope.go:117] "RemoveContainer" containerID="ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.045716 4833 scope.go:117] "RemoveContainer" containerID="721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194" Jan 27 14:25:37 crc kubenswrapper[4833]: E0127 14:25:37.046237 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194\": container with ID starting with 721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194 not found: ID does not exist" containerID="721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.046301 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194"} err="failed to get container status \"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194\": rpc error: code = NotFound desc = could not find container \"721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194\": container with ID starting with 721b0d4c15a213be7b78404d9274889576214de88df43526e698afcbb8cdf194 not found: ID does not exist" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.046348 4833 scope.go:117] "RemoveContainer" containerID="995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123" Jan 27 14:25:37 crc kubenswrapper[4833]: E0127 14:25:37.046770 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123\": container with ID starting with 
995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123 not found: ID does not exist" containerID="995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.046818 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123"} err="failed to get container status \"995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123\": rpc error: code = NotFound desc = could not find container \"995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123\": container with ID starting with 995be9e1861cbb620f86be92a637661200fac7566aea3dd5a57e64bb4307d123 not found: ID does not exist" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.046847 4833 scope.go:117] "RemoveContainer" containerID="ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67" Jan 27 14:25:37 crc kubenswrapper[4833]: E0127 14:25:37.047245 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67\": container with ID starting with ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67 not found: ID does not exist" containerID="ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.047380 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67"} err="failed to get container status \"ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67\": rpc error: code = NotFound desc = could not find container \"ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67\": container with ID starting with ff7f99e90c9d7b0b03d507d23b2575b81038f993db68c9c72bcb41f1745caf67 not found: ID does not 
exist" Jan 27 14:25:37 crc kubenswrapper[4833]: I0127 14:25:37.224867 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" path="/var/lib/kubelet/pods/6029a94a-e5fd-4a74-85c2-0d369c533708/volumes" Jan 27 14:25:38 crc kubenswrapper[4833]: I0127 14:25:38.981433 4833 generic.go:334] "Generic (PLEG): container finished" podID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerID="2c4030456cfbf92ce40873399b08e6cb90d4da9ba1853ac0bde07bf12178089c" exitCode=0 Jan 27 14:25:38 crc kubenswrapper[4833]: I0127 14:25:38.981530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" event={"ID":"dfd1a4fc-5465-4778-9c41-4be0cf541237","Type":"ContainerDied","Data":"2c4030456cfbf92ce40873399b08e6cb90d4da9ba1853ac0bde07bf12178089c"} Jan 27 14:25:40 crc kubenswrapper[4833]: I0127 14:25:40.000676 4833 generic.go:334] "Generic (PLEG): container finished" podID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerID="91f075251ac8d5f1e6b320ee4a899ca860d10e8ab0ed35f155606a724711d3e2" exitCode=0 Jan 27 14:25:40 crc kubenswrapper[4833]: I0127 14:25:40.000742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" event={"ID":"dfd1a4fc-5465-4778-9c41-4be0cf541237","Type":"ContainerDied","Data":"91f075251ac8d5f1e6b320ee4a899ca860d10e8ab0ed35f155606a724711d3e2"} Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.265358 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.339365 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsxmx\" (UniqueName: \"kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx\") pod \"dfd1a4fc-5465-4778-9c41-4be0cf541237\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.339436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle\") pod \"dfd1a4fc-5465-4778-9c41-4be0cf541237\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.339580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util\") pod \"dfd1a4fc-5465-4778-9c41-4be0cf541237\" (UID: \"dfd1a4fc-5465-4778-9c41-4be0cf541237\") " Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.340288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle" (OuterVolumeSpecName: "bundle") pod "dfd1a4fc-5465-4778-9c41-4be0cf541237" (UID: "dfd1a4fc-5465-4778-9c41-4be0cf541237"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.346105 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx" (OuterVolumeSpecName: "kube-api-access-lsxmx") pod "dfd1a4fc-5465-4778-9c41-4be0cf541237" (UID: "dfd1a4fc-5465-4778-9c41-4be0cf541237"). InnerVolumeSpecName "kube-api-access-lsxmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.354952 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util" (OuterVolumeSpecName: "util") pod "dfd1a4fc-5465-4778-9c41-4be0cf541237" (UID: "dfd1a4fc-5465-4778-9c41-4be0cf541237"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.441424 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.441499 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsxmx\" (UniqueName: \"kubernetes.io/projected/dfd1a4fc-5465-4778-9c41-4be0cf541237-kube-api-access-lsxmx\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:41 crc kubenswrapper[4833]: I0127 14:25:41.441514 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfd1a4fc-5465-4778-9c41-4be0cf541237-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:42 crc kubenswrapper[4833]: I0127 14:25:42.017516 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" event={"ID":"dfd1a4fc-5465-4778-9c41-4be0cf541237","Type":"ContainerDied","Data":"ec7161ef4a5f1b709eb594c537c9695d0e7083197499d7bef39ecd9c51f73116"} Jan 27 14:25:42 crc kubenswrapper[4833]: I0127 14:25:42.017567 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7161ef4a5f1b709eb594c537c9695d0e7083197499d7bef39ecd9c51f73116" Jan 27 14:25:42 crc kubenswrapper[4833]: I0127 14:25:42.017629 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498097 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bmcd8"] Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498656 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="extract-utilities" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498673 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="extract-utilities" Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498692 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="util" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498701 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="util" Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498710 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="pull" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498720 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="pull" Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498727 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="extract" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498733 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="extract" Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498748 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" 
containerName="registry-server" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498755 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="registry-server" Jan 27 14:25:44 crc kubenswrapper[4833]: E0127 14:25:44.498767 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="extract-content" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498775 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="extract-content" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498911 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6029a94a-e5fd-4a74-85c2-0d369c533708" containerName="registry-server" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.498931 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd1a4fc-5465-4778-9c41-4be0cf541237" containerName="extract" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.499396 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.501323 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.501664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9v4x4" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.502455 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.508291 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bmcd8"] Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.583298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnkl\" (UniqueName: \"kubernetes.io/projected/a65650e4-9186-4e0a-a896-d372f80b1843-kube-api-access-fxnkl\") pod \"nmstate-operator-646758c888-bmcd8\" (UID: \"a65650e4-9186-4e0a-a896-d372f80b1843\") " pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.684350 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxnkl\" (UniqueName: \"kubernetes.io/projected/a65650e4-9186-4e0a-a896-d372f80b1843-kube-api-access-fxnkl\") pod \"nmstate-operator-646758c888-bmcd8\" (UID: \"a65650e4-9186-4e0a-a896-d372f80b1843\") " pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.716908 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxnkl\" (UniqueName: \"kubernetes.io/projected/a65650e4-9186-4e0a-a896-d372f80b1843-kube-api-access-fxnkl\") pod \"nmstate-operator-646758c888-bmcd8\" (UID: 
\"a65650e4-9186-4e0a-a896-d372f80b1843\") " pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" Jan 27 14:25:44 crc kubenswrapper[4833]: I0127 14:25:44.814482 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" Jan 27 14:25:45 crc kubenswrapper[4833]: I0127 14:25:45.039089 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-bmcd8"] Jan 27 14:25:46 crc kubenswrapper[4833]: I0127 14:25:46.049671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" event={"ID":"a65650e4-9186-4e0a-a896-d372f80b1843","Type":"ContainerStarted","Data":"8e8c54bbac30e93380275252077d635353fde1060bcbe681e08a241322e4b66c"} Jan 27 14:25:56 crc kubenswrapper[4833]: I0127 14:25:56.119972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" event={"ID":"a65650e4-9186-4e0a-a896-d372f80b1843","Type":"ContainerStarted","Data":"10be8768c98a38362b2b454c573a7e15e8d022a42ab7e7ba18b0e3249e52d5b9"} Jan 27 14:25:56 crc kubenswrapper[4833]: I0127 14:25:56.138050 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-bmcd8" podStartSLOduration=1.553349547 podStartE2EDuration="12.138030784s" podCreationTimestamp="2026-01-27 14:25:44 +0000 UTC" firstStartedPulling="2026-01-27 14:25:45.059604537 +0000 UTC m=+846.710928959" lastFinishedPulling="2026-01-27 14:25:55.644285784 +0000 UTC m=+857.295610196" observedRunningTime="2026-01-27 14:25:56.137082131 +0000 UTC m=+857.788406543" watchObservedRunningTime="2026-01-27 14:25:56.138030784 +0000 UTC m=+857.789355206" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.239678 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-424jq"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 
14:25:57.240888 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.243351 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.244211 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.244999 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jsnr5" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.246148 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.258762 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-424jq"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.269439 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.275131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.275194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mq6w\" (UniqueName: \"kubernetes.io/projected/088ff1e3-c56b-4336-a7ba-3728a0f1a9ff-kube-api-access-5mq6w\") pod \"nmstate-metrics-54757c584b-424jq\" (UID: 
\"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.275227 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqqdn\" (UniqueName: \"kubernetes.io/projected/8151e9d2-3b0b-494c-b563-3d0615d6d513-kube-api-access-fqqdn\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.284741 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8fprr"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.285985 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.356902 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.357568 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.359203 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-s4ngn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.359347 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.359543 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-ovs-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376274 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-nmstate-lock\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mq6w\" (UniqueName: \"kubernetes.io/projected/088ff1e3-c56b-4336-a7ba-3728a0f1a9ff-kube-api-access-5mq6w\") pod \"nmstate-metrics-54757c584b-424jq\" (UID: \"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376324 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqdn\" (UniqueName: 
\"kubernetes.io/projected/8151e9d2-3b0b-494c-b563-3d0615d6d513-kube-api-access-fqqdn\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hlg\" (UniqueName: \"kubernetes.io/projected/845eae34-dda9-41fe-8388-891597f61a06-kube-api-access-n6hlg\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376423 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-dbus-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.376501 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: E0127 14:25:57.376593 4833 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 14:25:57 crc kubenswrapper[4833]: E0127 14:25:57.376643 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair podName:8151e9d2-3b0b-494c-b563-3d0615d6d513 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:25:57.876629894 +0000 UTC m=+859.527954296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-hfmfn" (UID: "8151e9d2-3b0b-494c-b563-3d0615d6d513") : secret "openshift-nmstate-webhook" not found Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.394283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqdn\" (UniqueName: \"kubernetes.io/projected/8151e9d2-3b0b-494c-b563-3d0615d6d513-kube-api-access-fqqdn\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.398581 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mq6w\" (UniqueName: \"kubernetes.io/projected/088ff1e3-c56b-4336-a7ba-3728a0f1a9ff-kube-api-access-5mq6w\") pod \"nmstate-metrics-54757c584b-424jq\" (UID: \"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.399977 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.477978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6hlg\" (UniqueName: \"kubernetes.io/projected/845eae34-dda9-41fe-8388-891597f61a06-kube-api-access-n6hlg\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-dbus-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-ovs-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-nmstate-lock\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478217 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478246 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-ovs-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478253 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-nmstate-lock\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x7x\" (UniqueName: \"kubernetes.io/projected/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-kube-api-access-42x7x\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.478456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/845eae34-dda9-41fe-8388-891597f61a06-dbus-socket\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.494142 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6hlg\" (UniqueName: \"kubernetes.io/projected/845eae34-dda9-41fe-8388-891597f61a06-kube-api-access-n6hlg\") pod \"nmstate-handler-8fprr\" (UID: \"845eae34-dda9-41fe-8388-891597f61a06\") " pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.550589 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bc7f74f64-8njpx"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 
14:25:57.551240 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.565403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.573768 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc7f74f64-8njpx"] Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579215 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-service-ca\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-console-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nftpn\" (UniqueName: \"kubernetes.io/projected/c5f4813e-42e9-461b-9449-15801d74841f-kube-api-access-nftpn\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-oauth-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579347 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42x7x\" (UniqueName: \"kubernetes.io/projected/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-kube-api-access-42x7x\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-trusted-ca-bundle\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.579424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-oauth-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.580187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: E0127 14:25:57.580293 4833 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 14:25:57 crc kubenswrapper[4833]: E0127 14:25:57.580330 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert podName:b3435cdb-b0c8-4961-9cdb-cf92c4c03c01 nodeName:}" failed. No retries permitted until 2026-01-27 14:25:58.080320113 +0000 UTC m=+859.731644515 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-srxsb" (UID: "b3435cdb-b0c8-4961-9cdb-cf92c4c03c01") : secret "plugin-serving-cert" not found Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.600634 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.601696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42x7x\" (UniqueName: \"kubernetes.io/projected/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-kube-api-access-42x7x\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:57 crc kubenswrapper[4833]: W0127 14:25:57.617425 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod845eae34_dda9_41fe_8388_891597f61a06.slice/crio-872bc26081f83be76c14f8670ba882884f4d4cb2d8a2f74e61aec7c4e6d1b074 WatchSource:0}: Error finding container 872bc26081f83be76c14f8670ba882884f4d4cb2d8a2f74e61aec7c4e6d1b074: Status 404 returned error can't find the container with id 872bc26081f83be76c14f8670ba882884f4d4cb2d8a2f74e61aec7c4e6d1b074 Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.681986 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-trusted-ca-bundle\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682033 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-oauth-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-service-ca\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-console-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682136 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nftpn\" (UniqueName: \"kubernetes.io/projected/c5f4813e-42e9-461b-9449-15801d74841f-kube-api-access-nftpn\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682159 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.682176 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-oauth-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.684792 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-oauth-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.684910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-service-ca\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.685860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-trusted-ca-bundle\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.688311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c5f4813e-42e9-461b-9449-15801d74841f-console-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.695620 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-serving-cert\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.696382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c5f4813e-42e9-461b-9449-15801d74841f-console-oauth-config\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.709648 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nftpn\" (UniqueName: \"kubernetes.io/projected/c5f4813e-42e9-461b-9449-15801d74841f-kube-api-access-nftpn\") pod \"console-6bc7f74f64-8njpx\" (UID: \"c5f4813e-42e9-461b-9449-15801d74841f\") " pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.869005 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.886954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.890314 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8151e9d2-3b0b-494c-b563-3d0615d6d513-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hfmfn\" (UID: \"8151e9d2-3b0b-494c-b563-3d0615d6d513\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:57 crc kubenswrapper[4833]: I0127 14:25:57.988912 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-424jq"] Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.093620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.098526 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3435cdb-b0c8-4961-9cdb-cf92c4c03c01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-srxsb\" (UID: \"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.133622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" event={"ID":"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff","Type":"ContainerStarted","Data":"fc0bf3e2401763b37c0260f4347ffb17ffd32f7ce35abfa2675053bbb6c12f94"} Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.134515 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8fprr" event={"ID":"845eae34-dda9-41fe-8388-891597f61a06","Type":"ContainerStarted","Data":"872bc26081f83be76c14f8670ba882884f4d4cb2d8a2f74e61aec7c4e6d1b074"} Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.176455 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.270316 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.335337 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bc7f74f64-8njpx"] Jan 27 14:25:58 crc kubenswrapper[4833]: W0127 14:25:58.355631 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5f4813e_42e9_461b_9449_15801d74841f.slice/crio-19ace3bfdc979581d56dfda3bd9f3c6363bf767a7bcab3957254a5c75308c603 WatchSource:0}: Error finding container 19ace3bfdc979581d56dfda3bd9f3c6363bf767a7bcab3957254a5c75308c603: Status 404 returned error can't find the container with id 19ace3bfdc979581d56dfda3bd9f3c6363bf767a7bcab3957254a5c75308c603 Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.647254 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn"] Jan 27 14:25:58 crc kubenswrapper[4833]: W0127 14:25:58.649417 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8151e9d2_3b0b_494c_b563_3d0615d6d513.slice/crio-9a4dc25906d4e5169f8ecd59b9dbfeeecd5a2495a3b0402e0d4e262b9775f4e3 WatchSource:0}: Error finding container 9a4dc25906d4e5169f8ecd59b9dbfeeecd5a2495a3b0402e0d4e262b9775f4e3: Status 404 returned error can't find the container with id 9a4dc25906d4e5169f8ecd59b9dbfeeecd5a2495a3b0402e0d4e262b9775f4e3 Jan 27 14:25:58 crc kubenswrapper[4833]: I0127 14:25:58.712651 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb"] Jan 27 14:25:58 crc kubenswrapper[4833]: W0127 14:25:58.716008 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3435cdb_b0c8_4961_9cdb_cf92c4c03c01.slice/crio-d90d47acae4222da12baf898edee06249208c32d31cebe5390bffd3109504dcd WatchSource:0}: Error finding container d90d47acae4222da12baf898edee06249208c32d31cebe5390bffd3109504dcd: Status 404 returned error can't find the container with id d90d47acae4222da12baf898edee06249208c32d31cebe5390bffd3109504dcd Jan 27 14:25:59 crc kubenswrapper[4833]: I0127 14:25:59.143824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" event={"ID":"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01","Type":"ContainerStarted","Data":"d90d47acae4222da12baf898edee06249208c32d31cebe5390bffd3109504dcd"} Jan 27 14:25:59 crc kubenswrapper[4833]: I0127 14:25:59.145060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" event={"ID":"8151e9d2-3b0b-494c-b563-3d0615d6d513","Type":"ContainerStarted","Data":"9a4dc25906d4e5169f8ecd59b9dbfeeecd5a2495a3b0402e0d4e262b9775f4e3"} Jan 27 14:25:59 crc kubenswrapper[4833]: I0127 14:25:59.146577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc7f74f64-8njpx" 
event={"ID":"c5f4813e-42e9-461b-9449-15801d74841f","Type":"ContainerStarted","Data":"1d48fb6ba631d31ed7b48c00a7f16274a4a11724fbd419f1a905335b6c1d4538"} Jan 27 14:25:59 crc kubenswrapper[4833]: I0127 14:25:59.146606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bc7f74f64-8njpx" event={"ID":"c5f4813e-42e9-461b-9449-15801d74841f","Type":"ContainerStarted","Data":"19ace3bfdc979581d56dfda3bd9f3c6363bf767a7bcab3957254a5c75308c603"} Jan 27 14:25:59 crc kubenswrapper[4833]: I0127 14:25:59.172222 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bc7f74f64-8njpx" podStartSLOduration=2.172180006 podStartE2EDuration="2.172180006s" podCreationTimestamp="2026-01-27 14:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:25:59.167140251 +0000 UTC m=+860.818464653" watchObservedRunningTime="2026-01-27 14:25:59.172180006 +0000 UTC m=+860.823504408" Jan 27 14:26:00 crc kubenswrapper[4833]: I0127 14:26:00.154954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" event={"ID":"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff","Type":"ContainerStarted","Data":"edd5e7e6681c471869ad7bbafc05069fde8fa76a38f706af242c7f1a36140fc9"} Jan 27 14:26:00 crc kubenswrapper[4833]: I0127 14:26:00.157302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" event={"ID":"8151e9d2-3b0b-494c-b563-3d0615d6d513","Type":"ContainerStarted","Data":"8929cd66a796e20f0a074fc026245011d06d8b328a60241dc284f800c367eb81"} Jan 27 14:26:00 crc kubenswrapper[4833]: I0127 14:26:00.157787 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:26:00 crc kubenswrapper[4833]: I0127 14:26:00.176191 4833 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" podStartSLOduration=1.9075001089999999 podStartE2EDuration="3.176169452s" podCreationTimestamp="2026-01-27 14:25:57 +0000 UTC" firstStartedPulling="2026-01-27 14:25:58.651654594 +0000 UTC m=+860.302978996" lastFinishedPulling="2026-01-27 14:25:59.920323937 +0000 UTC m=+861.571648339" observedRunningTime="2026-01-27 14:26:00.174341156 +0000 UTC m=+861.825665558" watchObservedRunningTime="2026-01-27 14:26:00.176169452 +0000 UTC m=+861.827493864" Jan 27 14:26:01 crc kubenswrapper[4833]: I0127 14:26:01.165835 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8fprr" event={"ID":"845eae34-dda9-41fe-8388-891597f61a06","Type":"ContainerStarted","Data":"ba1ce3527970932d7b4ef0a2b8f34d10c2b888339bb9ca368cfbb93235a0ec16"} Jan 27 14:26:01 crc kubenswrapper[4833]: I0127 14:26:01.166194 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:26:01 crc kubenswrapper[4833]: I0127 14:26:01.167790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" event={"ID":"b3435cdb-b0c8-4961-9cdb-cf92c4c03c01","Type":"ContainerStarted","Data":"c790d748b388890808b59666cd1854393a9f2c4c3163e9b1df3ae1006598a46b"} Jan 27 14:26:01 crc kubenswrapper[4833]: I0127 14:26:01.185657 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8fprr" podStartSLOduration=1.8874015370000001 podStartE2EDuration="4.185636854s" podCreationTimestamp="2026-01-27 14:25:57 +0000 UTC" firstStartedPulling="2026-01-27 14:25:57.619878304 +0000 UTC m=+859.271202706" lastFinishedPulling="2026-01-27 14:25:59.918113621 +0000 UTC m=+861.569438023" observedRunningTime="2026-01-27 14:26:01.18307473 +0000 UTC m=+862.834399192" watchObservedRunningTime="2026-01-27 14:26:01.185636854 +0000 UTC m=+862.836961256" Jan 27 14:26:01 
crc kubenswrapper[4833]: I0127 14:26:01.202798 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-srxsb" podStartSLOduration=2.180051474 podStartE2EDuration="4.202776304s" podCreationTimestamp="2026-01-27 14:25:57 +0000 UTC" firstStartedPulling="2026-01-27 14:25:58.718111099 +0000 UTC m=+860.369435501" lastFinishedPulling="2026-01-27 14:26:00.740835929 +0000 UTC m=+862.392160331" observedRunningTime="2026-01-27 14:26:01.197782449 +0000 UTC m=+862.849106861" watchObservedRunningTime="2026-01-27 14:26:01.202776304 +0000 UTC m=+862.854100706" Jan 27 14:26:03 crc kubenswrapper[4833]: I0127 14:26:03.182146 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" event={"ID":"088ff1e3-c56b-4336-a7ba-3728a0f1a9ff","Type":"ContainerStarted","Data":"96289643a4ab77419ca4f68d3927ccfc6a94257280c28607bb8445eecd54bbe7"} Jan 27 14:26:07 crc kubenswrapper[4833]: I0127 14:26:07.640618 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8fprr" Jan 27 14:26:07 crc kubenswrapper[4833]: I0127 14:26:07.661439 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-424jq" podStartSLOduration=6.025694742 podStartE2EDuration="10.66141553s" podCreationTimestamp="2026-01-27 14:25:57 +0000 UTC" firstStartedPulling="2026-01-27 14:25:58.001547859 +0000 UTC m=+859.652872271" lastFinishedPulling="2026-01-27 14:26:02.637268647 +0000 UTC m=+864.288593059" observedRunningTime="2026-01-27 14:26:03.213154385 +0000 UTC m=+864.864478797" watchObservedRunningTime="2026-01-27 14:26:07.66141553 +0000 UTC m=+869.312739972" Jan 27 14:26:07 crc kubenswrapper[4833]: I0127 14:26:07.870266 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:26:07 crc kubenswrapper[4833]: 
I0127 14:26:07.870433 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:26:07 crc kubenswrapper[4833]: I0127 14:26:07.875327 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:26:08 crc kubenswrapper[4833]: I0127 14:26:08.230974 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bc7f74f64-8njpx" Jan 27 14:26:08 crc kubenswrapper[4833]: I0127 14:26:08.334199 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:26:18 crc kubenswrapper[4833]: I0127 14:26:18.182250 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hfmfn" Jan 27 14:26:32 crc kubenswrapper[4833]: I0127 14:26:32.859241 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w"] Jan 27 14:26:32 crc kubenswrapper[4833]: I0127 14:26:32.861248 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:32 crc kubenswrapper[4833]: I0127 14:26:32.864277 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:26:32 crc kubenswrapper[4833]: I0127 14:26:32.872224 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w"] Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.009577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.009636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzsd\" (UniqueName: \"kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.009676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: 
I0127 14:26:33.114031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.114117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzsd\" (UniqueName: \"kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.114175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.115049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.115491 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.153177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzsd\" (UniqueName: \"kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.178466 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.376197 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-rrz8c" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerName="console" containerID="cri-o://617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26" gracePeriod=15 Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.409731 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w"] Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.814391 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rrz8c_15e99fdb-21ba-4e48-a4d3-6e93f9907413/console/0.log" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.814519 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.927851 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.927923 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.927968 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdc8c\" (UniqueName: \"kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.928039 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.928071 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.928137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.928158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config\") pod \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\" (UID: \"15e99fdb-21ba-4e48-a4d3-6e93f9907413\") " Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.929558 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.929857 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca" (OuterVolumeSpecName: "service-ca") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.930145 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config" (OuterVolumeSpecName: "console-config") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.930161 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.934229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c" (OuterVolumeSpecName: "kube-api-access-sdc8c") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "kube-api-access-sdc8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.934236 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:26:33 crc kubenswrapper[4833]: I0127 14:26:33.934423 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "15e99fdb-21ba-4e48-a4d3-6e93f9907413" (UID: "15e99fdb-21ba-4e48-a4d3-6e93f9907413"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029421 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdc8c\" (UniqueName: \"kubernetes.io/projected/15e99fdb-21ba-4e48-a4d3-6e93f9907413-kube-api-access-sdc8c\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029515 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029537 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029555 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029574 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029591 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.029609 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15e99fdb-21ba-4e48-a4d3-6e93f9907413-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:34 crc 
kubenswrapper[4833]: I0127 14:26:34.401703 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-rrz8c_15e99fdb-21ba-4e48-a4d3-6e93f9907413/console/0.log" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.401972 4833 generic.go:334] "Generic (PLEG): container finished" podID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerID="617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26" exitCode=2 Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.402021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rrz8c" event={"ID":"15e99fdb-21ba-4e48-a4d3-6e93f9907413","Type":"ContainerDied","Data":"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26"} Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.402343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-rrz8c" event={"ID":"15e99fdb-21ba-4e48-a4d3-6e93f9907413","Type":"ContainerDied","Data":"b042f8ff6c135461120e16f04d5d375d85b6bac698d079a8f35b0477a8012337"} Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.402065 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-rrz8c" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.402466 4833 scope.go:117] "RemoveContainer" containerID="617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.403530 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerID="0d7337a381cfe5227f948129162f38be6292b49ed9c55d6f4bf9d8d308178071" exitCode=0 Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.403560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" event={"ID":"e8423c47-b673-4d7f-ace2-68f58b293b5d","Type":"ContainerDied","Data":"0d7337a381cfe5227f948129162f38be6292b49ed9c55d6f4bf9d8d308178071"} Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.403577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" event={"ID":"e8423c47-b673-4d7f-ace2-68f58b293b5d","Type":"ContainerStarted","Data":"81ea541dbeb48a27ad43026c77001c472f0ad52ab6e98023e563c3849fe2854f"} Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.420516 4833 scope.go:117] "RemoveContainer" containerID="617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26" Jan 27 14:26:34 crc kubenswrapper[4833]: E0127 14:26:34.422268 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26\": container with ID starting with 617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26 not found: ID does not exist" containerID="617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.422316 4833 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26"} err="failed to get container status \"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26\": rpc error: code = NotFound desc = could not find container \"617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26\": container with ID starting with 617f96746db56861d1874960b682c919d3bbdcfba213c96d812d68b8ef027d26 not found: ID does not exist" Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.446112 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:26:34 crc kubenswrapper[4833]: I0127 14:26:34.451249 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-rrz8c"] Jan 27 14:26:35 crc kubenswrapper[4833]: I0127 14:26:35.219897 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" path="/var/lib/kubelet/pods/15e99fdb-21ba-4e48-a4d3-6e93f9907413/volumes" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.240153 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:36 crc kubenswrapper[4833]: E0127 14:26:36.240683 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerName="console" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.240695 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerName="console" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.240828 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e99fdb-21ba-4e48-a4d3-6e93f9907413" containerName="console" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.241887 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.253237 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.357277 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.357432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.357493 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrqfp\" (UniqueName: \"kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.419930 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerID="62cb7ad88e4127004232788839be0c98ee8f12d808af276938160c833173198d" exitCode=0 Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.419969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" 
event={"ID":"e8423c47-b673-4d7f-ace2-68f58b293b5d","Type":"ContainerDied","Data":"62cb7ad88e4127004232788839be0c98ee8f12d808af276938160c833173198d"} Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.459021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.459117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.459156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrqfp\" (UniqueName: \"kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.459574 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.460173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities\") pod 
\"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.482989 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrqfp\" (UniqueName: \"kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp\") pod \"community-operators-nqvkq\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:36 crc kubenswrapper[4833]: I0127 14:26:36.555666 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.040282 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.428267 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerID="2697f00b1c23133239a3bc292557f85c20b562edcbd81ea9dc61ca23d6fe2740" exitCode=0 Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.428347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" event={"ID":"e8423c47-b673-4d7f-ace2-68f58b293b5d","Type":"ContainerDied","Data":"2697f00b1c23133239a3bc292557f85c20b562edcbd81ea9dc61ca23d6fe2740"} Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.430055 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerID="d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319" exitCode=0 Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.430113 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" 
event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerDied","Data":"d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319"} Jan 27 14:26:37 crc kubenswrapper[4833]: I0127 14:26:37.430147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerStarted","Data":"7eab140bac2ce9f764e1f38c9e3ba50a3a55b491a7206d73f44784a60996a0f8"} Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.729758 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.799948 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle\") pod \"e8423c47-b673-4d7f-ace2-68f58b293b5d\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.800131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util\") pod \"e8423c47-b673-4d7f-ace2-68f58b293b5d\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.800164 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzsd\" (UniqueName: \"kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd\") pod \"e8423c47-b673-4d7f-ace2-68f58b293b5d\" (UID: \"e8423c47-b673-4d7f-ace2-68f58b293b5d\") " Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.804278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle" (OuterVolumeSpecName: "bundle") pod 
"e8423c47-b673-4d7f-ace2-68f58b293b5d" (UID: "e8423c47-b673-4d7f-ace2-68f58b293b5d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.811702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd" (OuterVolumeSpecName: "kube-api-access-jmzsd") pod "e8423c47-b673-4d7f-ace2-68f58b293b5d" (UID: "e8423c47-b673-4d7f-ace2-68f58b293b5d"). InnerVolumeSpecName "kube-api-access-jmzsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.830845 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util" (OuterVolumeSpecName: "util") pod "e8423c47-b673-4d7f-ace2-68f58b293b5d" (UID: "e8423c47-b673-4d7f-ace2-68f58b293b5d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.901234 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.901274 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8423c47-b673-4d7f-ace2-68f58b293b5d-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:38 crc kubenswrapper[4833]: I0127 14:26:38.901283 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzsd\" (UniqueName: \"kubernetes.io/projected/e8423c47-b673-4d7f-ace2-68f58b293b5d-kube-api-access-jmzsd\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:39 crc kubenswrapper[4833]: I0127 14:26:39.451701 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" event={"ID":"e8423c47-b673-4d7f-ace2-68f58b293b5d","Type":"ContainerDied","Data":"81ea541dbeb48a27ad43026c77001c472f0ad52ab6e98023e563c3849fe2854f"} Jan 27 14:26:39 crc kubenswrapper[4833]: I0127 14:26:39.451761 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w" Jan 27 14:26:39 crc kubenswrapper[4833]: I0127 14:26:39.451773 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ea541dbeb48a27ad43026c77001c472f0ad52ab6e98023e563c3849fe2854f" Jan 27 14:26:39 crc kubenswrapper[4833]: I0127 14:26:39.454176 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerID="e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208" exitCode=0 Jan 27 14:26:39 crc kubenswrapper[4833]: I0127 14:26:39.454213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerDied","Data":"e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208"} Jan 27 14:26:40 crc kubenswrapper[4833]: I0127 14:26:40.462739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerStarted","Data":"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c"} Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.553886 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nqvkq" podStartSLOduration=7.830998821 podStartE2EDuration="10.55387297s" podCreationTimestamp="2026-01-27 14:26:36 +0000 UTC" firstStartedPulling="2026-01-27 14:26:37.431850594 +0000 UTC m=+899.083174996" 
lastFinishedPulling="2026-01-27 14:26:40.154724743 +0000 UTC m=+901.806049145" observedRunningTime="2026-01-27 14:26:40.493460603 +0000 UTC m=+902.144785005" watchObservedRunningTime="2026-01-27 14:26:46.55387297 +0000 UTC m=+908.205197372" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555061 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw"] Jan 27 14:26:46 crc kubenswrapper[4833]: E0127 14:26:46.555236 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="util" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555245 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="util" Jan 27 14:26:46 crc kubenswrapper[4833]: E0127 14:26:46.555257 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="extract" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555263 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="extract" Jan 27 14:26:46 crc kubenswrapper[4833]: E0127 14:26:46.555272 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="pull" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555277 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="pull" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555381 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8423c47-b673-4d7f-ace2-68f58b293b5d" containerName="extract" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555744 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555799 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.555849 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.560721 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.560742 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.560799 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.561178 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l59zn" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.561233 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.621265 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw"] Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.622283 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.713275 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-webhook-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.713532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-apiservice-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.713579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl968\" (UniqueName: \"kubernetes.io/projected/1d930703-d737-4e5f-bc0d-8458cf05c635-kube-api-access-gl968\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.815259 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl968\" (UniqueName: \"kubernetes.io/projected/1d930703-d737-4e5f-bc0d-8458cf05c635-kube-api-access-gl968\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.815353 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-webhook-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: 
\"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.815371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-apiservice-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.822357 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-apiservice-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.831986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d930703-d737-4e5f-bc0d-8458cf05c635-webhook-cert\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.832824 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl968\" (UniqueName: \"kubernetes.io/projected/1d930703-d737-4e5f-bc0d-8458cf05c635-kube-api-access-gl968\") pod \"metallb-operator-controller-manager-6984b8c5f8-9xnnw\" (UID: \"1d930703-d737-4e5f-bc0d-8458cf05c635\") " pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:46 crc kubenswrapper[4833]: I0127 14:26:46.874389 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.014891 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg"] Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.016033 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.023466 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.023613 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.023756 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7prrg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.045730 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg"] Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.119333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvh4\" (UniqueName: \"kubernetes.io/projected/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-kube-api-access-cdvh4\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.119378 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-webhook-cert\") pod 
\"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.119425 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-apiservice-cert\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.129550 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw"] Jan 27 14:26:47 crc kubenswrapper[4833]: W0127 14:26:47.135630 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d930703_d737_4e5f_bc0d_8458cf05c635.slice/crio-7b60f9b03df51b3eb9f18a2486d5d47a2bd34b1360164b9f40109565c88c1b5d WatchSource:0}: Error finding container 7b60f9b03df51b3eb9f18a2486d5d47a2bd34b1360164b9f40109565c88c1b5d: Status 404 returned error can't find the container with id 7b60f9b03df51b3eb9f18a2486d5d47a2bd34b1360164b9f40109565c88c1b5d Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.220179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdvh4\" (UniqueName: \"kubernetes.io/projected/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-kube-api-access-cdvh4\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.220233 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-webhook-cert\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.220301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-apiservice-cert\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.226773 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-webhook-cert\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.232092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-apiservice-cert\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.241172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdvh4\" (UniqueName: \"kubernetes.io/projected/b90aca3e-1c5b-42d9-a04c-a40ac98e9521-kube-api-access-cdvh4\") pod \"metallb-operator-webhook-server-56b8697675-bd2zg\" (UID: \"b90aca3e-1c5b-42d9-a04c-a40ac98e9521\") " pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc 
kubenswrapper[4833]: I0127 14:26:47.363720 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.504191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" event={"ID":"1d930703-d737-4e5f-bc0d-8458cf05c635","Type":"ContainerStarted","Data":"7b60f9b03df51b3eb9f18a2486d5d47a2bd34b1360164b9f40109565c88c1b5d"} Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.565644 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:47 crc kubenswrapper[4833]: I0127 14:26:47.693839 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg"] Jan 27 14:26:47 crc kubenswrapper[4833]: W0127 14:26:47.698994 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb90aca3e_1c5b_42d9_a04c_a40ac98e9521.slice/crio-bc25c74737eede76808143cf6ee4cfea3614afd3d8276c0ba6444a329785d462 WatchSource:0}: Error finding container bc25c74737eede76808143cf6ee4cfea3614afd3d8276c0ba6444a329785d462: Status 404 returned error can't find the container with id bc25c74737eede76808143cf6ee4cfea3614afd3d8276c0ba6444a329785d462 Jan 27 14:26:48 crc kubenswrapper[4833]: I0127 14:26:48.212633 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:48 crc kubenswrapper[4833]: I0127 14:26:48.511969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" event={"ID":"b90aca3e-1c5b-42d9-a04c-a40ac98e9521","Type":"ContainerStarted","Data":"bc25c74737eede76808143cf6ee4cfea3614afd3d8276c0ba6444a329785d462"} Jan 27 14:26:49 crc 
kubenswrapper[4833]: I0127 14:26:49.520898 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nqvkq" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="registry-server" containerID="cri-o://22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c" gracePeriod=2 Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.412222 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.530559 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerID="22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c" exitCode=0 Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.530620 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerDied","Data":"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c"} Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.530647 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nqvkq" event={"ID":"6b8a17ba-a65e-4da6-a6e9-881fab38f702","Type":"ContainerDied","Data":"7eab140bac2ce9f764e1f38c9e3ba50a3a55b491a7206d73f44784a60996a0f8"} Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.530663 4833 scope.go:117] "RemoveContainer" containerID="22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.530801 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nqvkq" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.535239 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" event={"ID":"1d930703-d737-4e5f-bc0d-8458cf05c635","Type":"ContainerStarted","Data":"09389288ff59e76c192c8d88790b3fd87da38b36037f109fcbb9af926b6cb885"} Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.535494 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.547976 4833 scope.go:117] "RemoveContainer" containerID="e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.566944 4833 scope.go:117] "RemoveContainer" containerID="d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.574167 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" podStartSLOduration=1.5176236570000001 podStartE2EDuration="4.57414552s" podCreationTimestamp="2026-01-27 14:26:46 +0000 UTC" firstStartedPulling="2026-01-27 14:26:47.13709171 +0000 UTC m=+908.788416122" lastFinishedPulling="2026-01-27 14:26:50.193613583 +0000 UTC m=+911.844937985" observedRunningTime="2026-01-27 14:26:50.565157765 +0000 UTC m=+912.216482157" watchObservedRunningTime="2026-01-27 14:26:50.57414552 +0000 UTC m=+912.225469932" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.588896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrqfp\" (UniqueName: \"kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp\") pod \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\" (UID: 
\"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.589007 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities\") pod \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.589030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content\") pod \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\" (UID: \"6b8a17ba-a65e-4da6-a6e9-881fab38f702\") " Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.593202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities" (OuterVolumeSpecName: "utilities") pod "6b8a17ba-a65e-4da6-a6e9-881fab38f702" (UID: "6b8a17ba-a65e-4da6-a6e9-881fab38f702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.598679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp" (OuterVolumeSpecName: "kube-api-access-hrqfp") pod "6b8a17ba-a65e-4da6-a6e9-881fab38f702" (UID: "6b8a17ba-a65e-4da6-a6e9-881fab38f702"). InnerVolumeSpecName "kube-api-access-hrqfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.602067 4833 scope.go:117] "RemoveContainer" containerID="22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c" Jan 27 14:26:50 crc kubenswrapper[4833]: E0127 14:26:50.615637 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c\": container with ID starting with 22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c not found: ID does not exist" containerID="22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.615971 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c"} err="failed to get container status \"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c\": rpc error: code = NotFound desc = could not find container \"22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c\": container with ID starting with 22054d5704184cbc7c536f687c8f801d34a04d980f49e9752ad1e6913553b07c not found: ID does not exist" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.615996 4833 scope.go:117] "RemoveContainer" containerID="e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208" Jan 27 14:26:50 crc kubenswrapper[4833]: E0127 14:26:50.616254 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208\": container with ID starting with e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208 not found: ID does not exist" containerID="e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.616273 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208"} err="failed to get container status \"e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208\": rpc error: code = NotFound desc = could not find container \"e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208\": container with ID starting with e0192ba2115c77c0db75b0e6a1caf1d32e84cdeb71e9dc41c2e81b0933875208 not found: ID does not exist" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.616285 4833 scope.go:117] "RemoveContainer" containerID="d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319" Jan 27 14:26:50 crc kubenswrapper[4833]: E0127 14:26:50.616477 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319\": container with ID starting with d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319 not found: ID does not exist" containerID="d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.616494 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319"} err="failed to get container status \"d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319\": rpc error: code = NotFound desc = could not find container \"d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319\": container with ID starting with d8511819a8e63e5aa8ca7038125b7c12517f3415700fcf5a092eb72cccc4a319 not found: ID does not exist" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.652165 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "6b8a17ba-a65e-4da6-a6e9-881fab38f702" (UID: "6b8a17ba-a65e-4da6-a6e9-881fab38f702"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.691968 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.692008 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8a17ba-a65e-4da6-a6e9-881fab38f702-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.692023 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrqfp\" (UniqueName: \"kubernetes.io/projected/6b8a17ba-a65e-4da6-a6e9-881fab38f702-kube-api-access-hrqfp\") on node \"crc\" DevicePath \"\"" Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.857363 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:50 crc kubenswrapper[4833]: I0127 14:26:50.863621 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nqvkq"] Jan 27 14:26:51 crc kubenswrapper[4833]: I0127 14:26:51.220684 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" path="/var/lib/kubelet/pods/6b8a17ba-a65e-4da6-a6e9-881fab38f702/volumes" Jan 27 14:26:53 crc kubenswrapper[4833]: I0127 14:26:53.554291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" event={"ID":"b90aca3e-1c5b-42d9-a04c-a40ac98e9521","Type":"ContainerStarted","Data":"eb0b94d601670c3ae84dfe929dd311ba4d17be6cfce9336b39feb3807d2786fe"} Jan 27 14:26:53 crc 
kubenswrapper[4833]: I0127 14:26:53.554551 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:26:53 crc kubenswrapper[4833]: I0127 14:26:53.572668 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" podStartSLOduration=2.412757629 podStartE2EDuration="7.57264974s" podCreationTimestamp="2026-01-27 14:26:46 +0000 UTC" firstStartedPulling="2026-01-27 14:26:47.703384189 +0000 UTC m=+909.354708621" lastFinishedPulling="2026-01-27 14:26:52.86327629 +0000 UTC m=+914.514600732" observedRunningTime="2026-01-27 14:26:53.571358368 +0000 UTC m=+915.222682780" watchObservedRunningTime="2026-01-27 14:26:53.57264974 +0000 UTC m=+915.223974142" Jan 27 14:27:07 crc kubenswrapper[4833]: I0127 14:27:07.368985 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56b8697675-bd2zg" Jan 27 14:27:26 crc kubenswrapper[4833]: I0127 14:27:26.877322 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6984b8c5f8-9xnnw" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.694271 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tvh76"] Jan 27 14:27:27 crc kubenswrapper[4833]: E0127 14:27:27.694888 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="extract-utilities" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.694994 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="extract-utilities" Jan 27 14:27:27 crc kubenswrapper[4833]: E0127 14:27:27.695086 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" 
containerName="extract-content" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.695160 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="extract-content" Jan 27 14:27:27 crc kubenswrapper[4833]: E0127 14:27:27.695242 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="registry-server" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.695313 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="registry-server" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.695539 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8a17ba-a65e-4da6-a6e9-881fab38f702" containerName="registry-server" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.698577 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.699743 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8"] Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.700652 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.706070 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-j8hll" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.706380 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.706762 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.717231 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8"] Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.720162 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.817886 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-s7g75"] Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.818892 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-s7g75" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.822902 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.823804 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.824077 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.825459 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slfjb" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828120 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-conf\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-reloader\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcm4\" (UniqueName: \"kubernetes.io/projected/08cd7e0d-fb76-4bad-b82f-1ce499053722-kube-api-access-tfcm4\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828204 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djxn\" (UniqueName: \"kubernetes.io/projected/1ad3206d-30a8-4d32-9e9f-b6e04c016001-kube-api-access-2djxn\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ad3206d-30a8-4d32-9e9f-b6e04c016001-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828252 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-sockets\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-startup\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics-certs\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.828324 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.837499 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-swvk9"] Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.838460 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.840046 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.852157 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-swvk9"] Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2j88\" (UniqueName: \"kubernetes.io/projected/312df926-5f4b-4dee-b49f-ab00e0748a8d-kube-api-access-c2j88\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929218 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ad3206d-30a8-4d32-9e9f-b6e04c016001-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-metrics-certs\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-sockets\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929283 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929366 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-startup\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929459 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/88a8bf6c-e725-44ef-8a09-03ff49fa1546-kube-api-access-4lspp\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics-certs\") pod \"frr-k8s-tvh76\" 
(UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/312df926-5f4b-4dee-b49f-ab00e0748a8d-metallb-excludel2\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-conf\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-reloader\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcm4\" (UniqueName: \"kubernetes.io/projected/08cd7e0d-fb76-4bad-b82f-1ce499053722-kube-api-access-tfcm4\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929623 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-metrics-certs\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929639 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djxn\" (UniqueName: \"kubernetes.io/projected/1ad3206d-30a8-4d32-9e9f-b6e04c016001-kube-api-access-2djxn\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929657 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-cert\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.929945 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-reloader\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.930105 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-conf\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.930255 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-startup\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.930485 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.932752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/08cd7e0d-fb76-4bad-b82f-1ce499053722-frr-sockets\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.935231 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ad3206d-30a8-4d32-9e9f-b6e04c016001-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.936915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08cd7e0d-fb76-4bad-b82f-1ce499053722-metrics-certs\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.955561 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djxn\" (UniqueName: \"kubernetes.io/projected/1ad3206d-30a8-4d32-9e9f-b6e04c016001-kube-api-access-2djxn\") pod \"frr-k8s-webhook-server-7df86c4f6c-lxtf8\" (UID: \"1ad3206d-30a8-4d32-9e9f-b6e04c016001\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:27 crc kubenswrapper[4833]: I0127 14:27:27.959411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcm4\" (UniqueName: \"kubernetes.io/projected/08cd7e0d-fb76-4bad-b82f-1ce499053722-kube-api-access-tfcm4\") pod \"frr-k8s-tvh76\" (UID: \"08cd7e0d-fb76-4bad-b82f-1ce499053722\") " pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.018832 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.030632 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.031211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-cert\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.031405 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2j88\" (UniqueName: \"kubernetes.io/projected/312df926-5f4b-4dee-b49f-ab00e0748a8d-kube-api-access-c2j88\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.031558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-metrics-certs\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 
14:27:28.031649 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75"
Jan 27 14:27:28 crc kubenswrapper[4833]: E0127 14:27:28.031752 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 14:27:28 crc kubenswrapper[4833]: E0127 14:27:28.031817 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist podName:312df926-5f4b-4dee-b49f-ab00e0748a8d nodeName:}" failed. No retries permitted until 2026-01-27 14:27:28.531797208 +0000 UTC m=+950.183121710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist") pod "speaker-s7g75" (UID: "312df926-5f4b-4dee-b49f-ab00e0748a8d") : secret "metallb-memberlist" not found
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.031770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/88a8bf6c-e725-44ef-8a09-03ff49fa1546-kube-api-access-4lspp\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9"
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.032038 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/312df926-5f4b-4dee-b49f-ab00e0748a8d-metallb-excludel2\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75"
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.032169 4833 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-metrics-certs\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.032957 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/312df926-5f4b-4dee-b49f-ab00e0748a8d-metallb-excludel2\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.035465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-metrics-certs\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.035490 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-metrics-certs\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.037193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/88a8bf6c-e725-44ef-8a09-03ff49fa1546-cert\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.047491 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2j88\" (UniqueName: \"kubernetes.io/projected/312df926-5f4b-4dee-b49f-ab00e0748a8d-kube-api-access-c2j88\") pod 
\"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.060188 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lspp\" (UniqueName: \"kubernetes.io/projected/88a8bf6c-e725-44ef-8a09-03ff49fa1546-kube-api-access-4lspp\") pod \"controller-6968d8fdc4-swvk9\" (UID: \"88a8bf6c-e725-44ef-8a09-03ff49fa1546\") " pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.152769 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.287706 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8"] Jan 27 14:27:28 crc kubenswrapper[4833]: W0127 14:27:28.291496 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ad3206d_30a8_4d32_9e9f_b6e04c016001.slice/crio-de3e6ff5df8b8711079ffa12d1445b69485edae0c42924129de405886c3ae5f9 WatchSource:0}: Error finding container de3e6ff5df8b8711079ffa12d1445b69485edae0c42924129de405886c3ae5f9: Status 404 returned error can't find the container with id de3e6ff5df8b8711079ffa12d1445b69485edae0c42924129de405886c3ae5f9 Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.378173 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-swvk9"] Jan 27 14:27:28 crc kubenswrapper[4833]: W0127 14:27:28.381035 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88a8bf6c_e725_44ef_8a09_03ff49fa1546.slice/crio-0af89c43202bc0c34fe7ece29bcc57bc10b910ae7a2a1af606cdae749c903b19 WatchSource:0}: Error finding container 0af89c43202bc0c34fe7ece29bcc57bc10b910ae7a2a1af606cdae749c903b19: Status 
404 returned error can't find the container with id 0af89c43202bc0c34fe7ece29bcc57bc10b910ae7a2a1af606cdae749c903b19
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.545789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75"
Jan 27 14:27:28 crc kubenswrapper[4833]: E0127 14:27:28.546214 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 27 14:27:28 crc kubenswrapper[4833]: E0127 14:27:28.546664 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist podName:312df926-5f4b-4dee-b49f-ab00e0748a8d nodeName:}" failed. No retries permitted until 2026-01-27 14:27:29.546644418 +0000 UTC m=+951.197968820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist") pod "speaker-s7g75" (UID: "312df926-5f4b-4dee-b49f-ab00e0748a8d") : secret "metallb-memberlist" not found
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.788505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" event={"ID":"1ad3206d-30a8-4d32-9e9f-b6e04c016001","Type":"ContainerStarted","Data":"de3e6ff5df8b8711079ffa12d1445b69485edae0c42924129de405886c3ae5f9"}
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.789987 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-swvk9" event={"ID":"88a8bf6c-e725-44ef-8a09-03ff49fa1546","Type":"ContainerStarted","Data":"8b03b92031e5cacc698e0e1c053f0d948894f73278a3eac1930e7248438afce1"}
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.790010 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-swvk9" event={"ID":"88a8bf6c-e725-44ef-8a09-03ff49fa1546","Type":"ContainerStarted","Data":"0389c2ae9c45eb302cbfa62a6deeefc5bc408d0e005538562d955f3f2faf115f"}
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.790020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-swvk9" event={"ID":"88a8bf6c-e725-44ef-8a09-03ff49fa1546","Type":"ContainerStarted","Data":"0af89c43202bc0c34fe7ece29bcc57bc10b910ae7a2a1af606cdae749c903b19"}
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.790143 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-swvk9"
Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.790766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76"
event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"bf146a085f5f127bad333f54051d704dd9e1290e6d11a4695a7c38c4cb2a1e0c"} Jan 27 14:27:28 crc kubenswrapper[4833]: I0127 14:27:28.811230 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-swvk9" podStartSLOduration=1.8112077210000002 podStartE2EDuration="1.811207721s" podCreationTimestamp="2026-01-27 14:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:27:28.805748274 +0000 UTC m=+950.457072696" watchObservedRunningTime="2026-01-27 14:27:28.811207721 +0000 UTC m=+950.462532123" Jan 27 14:27:29 crc kubenswrapper[4833]: I0127 14:27:29.566293 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:29 crc kubenswrapper[4833]: I0127 14:27:29.572167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/312df926-5f4b-4dee-b49f-ab00e0748a8d-memberlist\") pod \"speaker-s7g75\" (UID: \"312df926-5f4b-4dee-b49f-ab00e0748a8d\") " pod="metallb-system/speaker-s7g75" Jan 27 14:27:29 crc kubenswrapper[4833]: I0127 14:27:29.632223 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-s7g75" Jan 27 14:27:29 crc kubenswrapper[4833]: W0127 14:27:29.659802 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312df926_5f4b_4dee_b49f_ab00e0748a8d.slice/crio-7b197dc2977098f5c92b97a6e7f25514f73533e5408ef934c6603f9ba62025fd WatchSource:0}: Error finding container 7b197dc2977098f5c92b97a6e7f25514f73533e5408ef934c6603f9ba62025fd: Status 404 returned error can't find the container with id 7b197dc2977098f5c92b97a6e7f25514f73533e5408ef934c6603f9ba62025fd Jan 27 14:27:29 crc kubenswrapper[4833]: I0127 14:27:29.799095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7g75" event={"ID":"312df926-5f4b-4dee-b49f-ab00e0748a8d","Type":"ContainerStarted","Data":"7b197dc2977098f5c92b97a6e7f25514f73533e5408ef934c6603f9ba62025fd"} Jan 27 14:27:30 crc kubenswrapper[4833]: I0127 14:27:30.810179 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7g75" event={"ID":"312df926-5f4b-4dee-b49f-ab00e0748a8d","Type":"ContainerStarted","Data":"bcc2fb6c7aa7735c101100f5b884c2fce266eba6c93bd789cd2c2aa47cdd159f"} Jan 27 14:27:30 crc kubenswrapper[4833]: I0127 14:27:30.810496 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-s7g75" Jan 27 14:27:30 crc kubenswrapper[4833]: I0127 14:27:30.810507 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7g75" event={"ID":"312df926-5f4b-4dee-b49f-ab00e0748a8d","Type":"ContainerStarted","Data":"b7215b8840381116c6d3a9a90030ff7859390bf4c98693ba9132b922ed53923a"} Jan 27 14:27:30 crc kubenswrapper[4833]: I0127 14:27:30.834326 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-s7g75" podStartSLOduration=3.834308541 podStartE2EDuration="3.834308541s" podCreationTimestamp="2026-01-27 14:27:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:27:30.829970072 +0000 UTC m=+952.481294474" watchObservedRunningTime="2026-01-27 14:27:30.834308541 +0000 UTC m=+952.485632943" Jan 27 14:27:32 crc kubenswrapper[4833]: I0127 14:27:32.260921 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:27:32 crc kubenswrapper[4833]: I0127 14:27:32.260999 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:27:35 crc kubenswrapper[4833]: I0127 14:27:35.842883 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" event={"ID":"1ad3206d-30a8-4d32-9e9f-b6e04c016001","Type":"ContainerStarted","Data":"010b7908305483a6b6fe1a22646aa02549c830a88ee46cc0e66db5df2c1b569c"} Jan 27 14:27:35 crc kubenswrapper[4833]: I0127 14:27:35.843532 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:35 crc kubenswrapper[4833]: I0127 14:27:35.844970 4833 generic.go:334] "Generic (PLEG): container finished" podID="08cd7e0d-fb76-4bad-b82f-1ce499053722" containerID="99cb37af31475ac30d74cffc7f9fc7d8a5eb84ac0d5d3e1088245ac3becf5c67" exitCode=0 Jan 27 14:27:35 crc kubenswrapper[4833]: I0127 14:27:35.845045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" 
event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerDied","Data":"99cb37af31475ac30d74cffc7f9fc7d8a5eb84ac0d5d3e1088245ac3becf5c67"} Jan 27 14:27:35 crc kubenswrapper[4833]: I0127 14:27:35.879094 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" podStartSLOduration=1.571247284 podStartE2EDuration="8.87906786s" podCreationTimestamp="2026-01-27 14:27:27 +0000 UTC" firstStartedPulling="2026-01-27 14:27:28.293421218 +0000 UTC m=+949.944745620" lastFinishedPulling="2026-01-27 14:27:35.601241794 +0000 UTC m=+957.252566196" observedRunningTime="2026-01-27 14:27:35.868931936 +0000 UTC m=+957.520256338" watchObservedRunningTime="2026-01-27 14:27:35.87906786 +0000 UTC m=+957.530392262" Jan 27 14:27:36 crc kubenswrapper[4833]: I0127 14:27:36.856290 4833 generic.go:334] "Generic (PLEG): container finished" podID="08cd7e0d-fb76-4bad-b82f-1ce499053722" containerID="37d2c32486ceb0e02252bec48024d70ae525f350fff8a2b91fd820e391c6bfbb" exitCode=0 Jan 27 14:27:36 crc kubenswrapper[4833]: I0127 14:27:36.856372 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerDied","Data":"37d2c32486ceb0e02252bec48024d70ae525f350fff8a2b91fd820e391c6bfbb"} Jan 27 14:27:37 crc kubenswrapper[4833]: I0127 14:27:37.867742 4833 generic.go:334] "Generic (PLEG): container finished" podID="08cd7e0d-fb76-4bad-b82f-1ce499053722" containerID="a6b2af9ef4f8673f8811685bdba2d3c67c36f51d8d22096113ac884c53ffdac5" exitCode=0 Jan 27 14:27:37 crc kubenswrapper[4833]: I0127 14:27:37.868064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerDied","Data":"a6b2af9ef4f8673f8811685bdba2d3c67c36f51d8d22096113ac884c53ffdac5"} Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.160825 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-swvk9" Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.922871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"34747cce4d8f03c4cb663727e9d13ad57547fe31ebf867b3fdfc0cece4d41e36"} Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.923178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"685a84d5c9cf2a8add47e2174ab310ba8075f3bc09c67f466718e9607e1e231c"} Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.923188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"243340484eda54a8aecf70783881527aa85fe187e8b1d12a89e4a7d9d5d1fac7"} Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.923197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"9de2591ce8dfdf092780573bb04f192ee29ce972ecfe21b7780fcd811914acc8"} Jan 27 14:27:38 crc kubenswrapper[4833]: I0127 14:27:38.923206 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"ad27bd735ea96fa7ebdb8dc0db0628a6fbaf923af63dd48a7ce8c5661da5a18b"} Jan 27 14:27:39 crc kubenswrapper[4833]: I0127 14:27:39.637192 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-s7g75" Jan 27 14:27:39 crc kubenswrapper[4833]: I0127 14:27:39.935534 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tvh76" 
event={"ID":"08cd7e0d-fb76-4bad-b82f-1ce499053722","Type":"ContainerStarted","Data":"acc134deeb1ab98d2a7f2b3741ba3575389f65ec8824ff156752f4606a477a87"} Jan 27 14:27:39 crc kubenswrapper[4833]: I0127 14:27:39.935977 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:39 crc kubenswrapper[4833]: I0127 14:27:39.975408 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tvh76" podStartSLOduration=5.584506308 podStartE2EDuration="12.975380343s" podCreationTimestamp="2026-01-27 14:27:27 +0000 UTC" firstStartedPulling="2026-01-27 14:27:28.22601125 +0000 UTC m=+949.877335652" lastFinishedPulling="2026-01-27 14:27:35.616885285 +0000 UTC m=+957.268209687" observedRunningTime="2026-01-27 14:27:39.971196969 +0000 UTC m=+961.622521401" watchObservedRunningTime="2026-01-27 14:27:39.975380343 +0000 UTC m=+961.626704785" Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.917313 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.919489 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.929022 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.929482 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.933752 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-blthv" Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.936467 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:42 crc kubenswrapper[4833]: I0127 14:27:42.975264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrpp\" (UniqueName: \"kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp\") pod \"openstack-operator-index-8xvfn\" (UID: \"6be603cf-5de7-4536-901e-ed0c5e9733e1\") " pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.020071 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.052894 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.076139 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrpp\" (UniqueName: \"kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp\") pod \"openstack-operator-index-8xvfn\" (UID: \"6be603cf-5de7-4536-901e-ed0c5e9733e1\") " 
pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.097822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrpp\" (UniqueName: \"kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp\") pod \"openstack-operator-index-8xvfn\" (UID: \"6be603cf-5de7-4536-901e-ed0c5e9733e1\") " pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.250603 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.722147 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:43 crc kubenswrapper[4833]: I0127 14:27:43.977943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8xvfn" event={"ID":"6be603cf-5de7-4536-901e-ed0c5e9733e1","Type":"ContainerStarted","Data":"7c9204c087b9a2ef766926b70f9fa91828036421d9ffa0f5e97a55b17d9d65fb"} Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.002243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8xvfn" event={"ID":"6be603cf-5de7-4536-901e-ed0c5e9733e1","Type":"ContainerStarted","Data":"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08"} Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.032868 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8xvfn" podStartSLOduration=2.498635758 podStartE2EDuration="4.032842307s" podCreationTimestamp="2026-01-27 14:27:42 +0000 UTC" firstStartedPulling="2026-01-27 14:27:43.732412164 +0000 UTC m=+965.383736586" lastFinishedPulling="2026-01-27 14:27:45.266618733 +0000 UTC m=+966.917943135" observedRunningTime="2026-01-27 
14:27:46.027592055 +0000 UTC m=+967.678916557" watchObservedRunningTime="2026-01-27 14:27:46.032842307 +0000 UTC m=+967.684166739" Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.276968 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.879119 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jbnzz"] Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.880305 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.893755 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jbnzz"] Jan 27 14:27:46 crc kubenswrapper[4833]: I0127 14:27:46.946156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tfn5\" (UniqueName: \"kubernetes.io/projected/d71f211e-9209-4c1c-891c-dd802162ec4a-kube-api-access-5tfn5\") pod \"openstack-operator-index-jbnzz\" (UID: \"d71f211e-9209-4c1c-891c-dd802162ec4a\") " pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:47 crc kubenswrapper[4833]: I0127 14:27:47.048081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tfn5\" (UniqueName: \"kubernetes.io/projected/d71f211e-9209-4c1c-891c-dd802162ec4a-kube-api-access-5tfn5\") pod \"openstack-operator-index-jbnzz\" (UID: \"d71f211e-9209-4c1c-891c-dd802162ec4a\") " pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:47 crc kubenswrapper[4833]: I0127 14:27:47.084189 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tfn5\" (UniqueName: \"kubernetes.io/projected/d71f211e-9209-4c1c-891c-dd802162ec4a-kube-api-access-5tfn5\") pod 
\"openstack-operator-index-jbnzz\" (UID: \"d71f211e-9209-4c1c-891c-dd802162ec4a\") " pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:47 crc kubenswrapper[4833]: I0127 14:27:47.241536 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:47 crc kubenswrapper[4833]: I0127 14:27:47.693497 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jbnzz"] Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.018270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jbnzz" event={"ID":"d71f211e-9209-4c1c-891c-dd802162ec4a","Type":"ContainerStarted","Data":"623a80c96783242f48793bcf9a37053b4717c26b2ffb08dd5f4de359be4af16e"} Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.018773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jbnzz" event={"ID":"d71f211e-9209-4c1c-891c-dd802162ec4a","Type":"ContainerStarted","Data":"2181a248267738bf0bf63a0d8d4810584b16280eaa59c4a3b39d56044457cd14"} Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.018329 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8xvfn" podUID="6be603cf-5de7-4536-901e-ed0c5e9733e1" containerName="registry-server" containerID="cri-o://70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08" gracePeriod=2 Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.028911 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tvh76" Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.042409 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-lxtf8" Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.089335 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jbnzz" podStartSLOduration=2.041295719 podStartE2EDuration="2.089306061s" podCreationTimestamp="2026-01-27 14:27:46 +0000 UTC" firstStartedPulling="2026-01-27 14:27:47.706695822 +0000 UTC m=+969.358020234" lastFinishedPulling="2026-01-27 14:27:47.754706164 +0000 UTC m=+969.406030576" observedRunningTime="2026-01-27 14:27:48.048434098 +0000 UTC m=+969.699758540" watchObservedRunningTime="2026-01-27 14:27:48.089306061 +0000 UTC m=+969.740630503" Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.444859 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.575588 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrpp\" (UniqueName: \"kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp\") pod \"6be603cf-5de7-4536-901e-ed0c5e9733e1\" (UID: \"6be603cf-5de7-4536-901e-ed0c5e9733e1\") " Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.583510 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp" (OuterVolumeSpecName: "kube-api-access-nwrpp") pod "6be603cf-5de7-4536-901e-ed0c5e9733e1" (UID: "6be603cf-5de7-4536-901e-ed0c5e9733e1"). InnerVolumeSpecName "kube-api-access-nwrpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:27:48 crc kubenswrapper[4833]: I0127 14:27:48.677383 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrpp\" (UniqueName: \"kubernetes.io/projected/6be603cf-5de7-4536-901e-ed0c5e9733e1-kube-api-access-nwrpp\") on node \"crc\" DevicePath \"\"" Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.032096 4833 generic.go:334] "Generic (PLEG): container finished" podID="6be603cf-5de7-4536-901e-ed0c5e9733e1" containerID="70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08" exitCode=0 Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.032226 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8xvfn" Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.032264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8xvfn" event={"ID":"6be603cf-5de7-4536-901e-ed0c5e9733e1","Type":"ContainerDied","Data":"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08"} Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.033500 4833 scope.go:117] "RemoveContainer" containerID="70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08" Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.033439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8xvfn" event={"ID":"6be603cf-5de7-4536-901e-ed0c5e9733e1","Type":"ContainerDied","Data":"7c9204c087b9a2ef766926b70f9fa91828036421d9ffa0f5e97a55b17d9d65fb"} Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.062440 4833 scope.go:117] "RemoveContainer" containerID="70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08" Jan 27 14:27:49 crc kubenswrapper[4833]: E0127 14:27:49.063144 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08\": container with ID starting with 70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08 not found: ID does not exist" containerID="70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08" Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.063221 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08"} err="failed to get container status \"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08\": rpc error: code = NotFound desc = could not find container \"70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08\": container with ID starting with 70260a3b2637a68c524fd5b69951b28ea127a5c831c2cce79eea648412939e08 not found: ID does not exist" Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.075419 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.081260 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8xvfn"] Jan 27 14:27:49 crc kubenswrapper[4833]: I0127 14:27:49.226996 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be603cf-5de7-4536-901e-ed0c5e9733e1" path="/var/lib/kubelet/pods/6be603cf-5de7-4536-901e-ed0c5e9733e1/volumes" Jan 27 14:27:57 crc kubenswrapper[4833]: I0127 14:27:57.242437 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:57 crc kubenswrapper[4833]: I0127 14:27:57.243192 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:57 crc kubenswrapper[4833]: I0127 14:27:57.287868 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:27:58 crc kubenswrapper[4833]: I0127 14:27:58.141901 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jbnzz" Jan 27 14:28:02 crc kubenswrapper[4833]: I0127 14:28:02.261502 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:28:02 crc kubenswrapper[4833]: I0127 14:28:02.261759 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.888499 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m"] Jan 27 14:28:03 crc kubenswrapper[4833]: E0127 14:28:03.888745 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be603cf-5de7-4536-901e-ed0c5e9733e1" containerName="registry-server" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.888756 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be603cf-5de7-4536-901e-ed0c5e9733e1" containerName="registry-server" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.888864 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be603cf-5de7-4536-901e-ed0c5e9733e1" containerName="registry-server" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.889663 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.891824 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ffhfl" Jan 27 14:28:03 crc kubenswrapper[4833]: I0127 14:28:03.902755 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m"] Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.004640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.004698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.004758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlrpw\" (UniqueName: \"kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 
14:28:04.105812 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.105882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.105959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlrpw\" (UniqueName: \"kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.106919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.107145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.126220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlrpw\" (UniqueName: \"kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw\") pod \"0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.214303 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:04 crc kubenswrapper[4833]: I0127 14:28:04.673406 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m"] Jan 27 14:28:05 crc kubenswrapper[4833]: I0127 14:28:05.164067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" event={"ID":"f63f4475-df09-4e45-b77e-4f498ea12af7","Type":"ContainerStarted","Data":"326632cb451afb864c6d44f2a1a7f33cd8390d33551032d02c966162fdfc2971"} Jan 27 14:28:06 crc kubenswrapper[4833]: I0127 14:28:06.175703 4833 generic.go:334] "Generic (PLEG): container finished" podID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerID="15cc568994f703a9e2a731df96fd9b1f9395eb5564aad58a101b63c4356c100c" exitCode=0 Jan 27 14:28:06 crc kubenswrapper[4833]: I0127 14:28:06.175816 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" event={"ID":"f63f4475-df09-4e45-b77e-4f498ea12af7","Type":"ContainerDied","Data":"15cc568994f703a9e2a731df96fd9b1f9395eb5564aad58a101b63c4356c100c"} Jan 27 14:28:07 crc kubenswrapper[4833]: I0127 14:28:07.185883 4833 generic.go:334] "Generic (PLEG): container finished" podID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerID="fd03f9d9b6a71ac739957868961f02074fc742058d02becc4f7ffa9f4e42919c" exitCode=0 Jan 27 14:28:07 crc kubenswrapper[4833]: I0127 14:28:07.185979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" event={"ID":"f63f4475-df09-4e45-b77e-4f498ea12af7","Type":"ContainerDied","Data":"fd03f9d9b6a71ac739957868961f02074fc742058d02becc4f7ffa9f4e42919c"} Jan 27 14:28:08 crc kubenswrapper[4833]: I0127 14:28:08.199731 4833 generic.go:334] "Generic (PLEG): container finished" podID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerID="1dceccb1cbb2a60736031ec5060d51592e2e0e53b9a31e12172c01edf876eeaa" exitCode=0 Jan 27 14:28:08 crc kubenswrapper[4833]: I0127 14:28:08.199822 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" event={"ID":"f63f4475-df09-4e45-b77e-4f498ea12af7","Type":"ContainerDied","Data":"1dceccb1cbb2a60736031ec5060d51592e2e0e53b9a31e12172c01edf876eeaa"} Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.441822 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.589399 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle\") pod \"f63f4475-df09-4e45-b77e-4f498ea12af7\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.589530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util\") pod \"f63f4475-df09-4e45-b77e-4f498ea12af7\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.589648 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlrpw\" (UniqueName: \"kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw\") pod \"f63f4475-df09-4e45-b77e-4f498ea12af7\" (UID: \"f63f4475-df09-4e45-b77e-4f498ea12af7\") " Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.590238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle" (OuterVolumeSpecName: "bundle") pod "f63f4475-df09-4e45-b77e-4f498ea12af7" (UID: "f63f4475-df09-4e45-b77e-4f498ea12af7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.596040 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw" (OuterVolumeSpecName: "kube-api-access-qlrpw") pod "f63f4475-df09-4e45-b77e-4f498ea12af7" (UID: "f63f4475-df09-4e45-b77e-4f498ea12af7"). InnerVolumeSpecName "kube-api-access-qlrpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.607010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util" (OuterVolumeSpecName: "util") pod "f63f4475-df09-4e45-b77e-4f498ea12af7" (UID: "f63f4475-df09-4e45-b77e-4f498ea12af7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.690604 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.690628 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlrpw\" (UniqueName: \"kubernetes.io/projected/f63f4475-df09-4e45-b77e-4f498ea12af7-kube-api-access-qlrpw\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:09 crc kubenswrapper[4833]: I0127 14:28:09.690640 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f63f4475-df09-4e45-b77e-4f498ea12af7-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:28:10 crc kubenswrapper[4833]: I0127 14:28:10.217751 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" event={"ID":"f63f4475-df09-4e45-b77e-4f498ea12af7","Type":"ContainerDied","Data":"326632cb451afb864c6d44f2a1a7f33cd8390d33551032d02c966162fdfc2971"} Jan 27 14:28:10 crc kubenswrapper[4833]: I0127 14:28:10.217783 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="326632cb451afb864c6d44f2a1a7f33cd8390d33551032d02c966162fdfc2971" Jan 27 14:28:10 crc kubenswrapper[4833]: I0127 14:28:10.217797 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.532865 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq"] Jan 27 14:28:16 crc kubenswrapper[4833]: E0127 14:28:16.533629 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="extract" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.533643 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="extract" Jan 27 14:28:16 crc kubenswrapper[4833]: E0127 14:28:16.533655 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="util" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.533661 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="util" Jan 27 14:28:16 crc kubenswrapper[4833]: E0127 14:28:16.533675 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="pull" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.533685 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="pull" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.533794 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63f4475-df09-4e45-b77e-4f498ea12af7" containerName="extract" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.534230 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.536424 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5pgz8" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.556435 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq"] Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.693309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2gr7\" (UniqueName: \"kubernetes.io/projected/6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd-kube-api-access-n2gr7\") pod \"openstack-operator-controller-init-77d48dd9c-9hsqq\" (UID: \"6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd\") " pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.794228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2gr7\" (UniqueName: \"kubernetes.io/projected/6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd-kube-api-access-n2gr7\") pod \"openstack-operator-controller-init-77d48dd9c-9hsqq\" (UID: \"6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd\") " pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.814623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2gr7\" (UniqueName: \"kubernetes.io/projected/6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd-kube-api-access-n2gr7\") pod \"openstack-operator-controller-init-77d48dd9c-9hsqq\" (UID: \"6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd\") " pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:16 crc kubenswrapper[4833]: I0127 14:28:16.855346 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:17 crc kubenswrapper[4833]: I0127 14:28:17.324981 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq"] Jan 27 14:28:18 crc kubenswrapper[4833]: I0127 14:28:18.292381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" event={"ID":"6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd","Type":"ContainerStarted","Data":"8066571dd162686caca6ec823693830d53669aa22bc7f883e2088a48d4d45e37"} Jan 27 14:28:21 crc kubenswrapper[4833]: I0127 14:28:21.316949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" event={"ID":"6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd","Type":"ContainerStarted","Data":"5aff666201cc7481faa56a210ed4a88a09321e4b15e6004b0beb2c7308065a39"} Jan 27 14:28:21 crc kubenswrapper[4833]: I0127 14:28:21.317550 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:21 crc kubenswrapper[4833]: I0127 14:28:21.361216 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" podStartSLOduration=1.7567215680000001 podStartE2EDuration="5.361190947s" podCreationTimestamp="2026-01-27 14:28:16 +0000 UTC" firstStartedPulling="2026-01-27 14:28:17.331727252 +0000 UTC m=+998.983051674" lastFinishedPulling="2026-01-27 14:28:20.936196651 +0000 UTC m=+1002.587521053" observedRunningTime="2026-01-27 14:28:21.353951855 +0000 UTC m=+1003.005276297" watchObservedRunningTime="2026-01-27 14:28:21.361190947 +0000 UTC m=+1003.012515359" Jan 27 14:28:26 crc kubenswrapper[4833]: I0127 14:28:26.858419 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-77d48dd9c-9hsqq" Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.260668 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.261815 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.261959 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.262732 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.262926 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f" gracePeriod=600 Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.399213 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f" exitCode=0 Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.399282 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f"} Jan 27 14:28:32 crc kubenswrapper[4833]: I0127 14:28:32.399654 4833 scope.go:117] "RemoveContainer" containerID="8a75584c2330835076c24d4403b17ee7a9367704e37be7e4c860bbe6180771fd" Jan 27 14:28:33 crc kubenswrapper[4833]: I0127 14:28:33.429168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b"} Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.039689 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.041207 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.045163 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-57p2z" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.062026 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.100308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zhs\" (UniqueName: \"kubernetes.io/projected/e74c1bee-e2b8-4b35-8ced-7832d9c1a824-kube-api-access-m6zhs\") pod \"barbican-operator-controller-manager-7f86f8796f-kp86b\" (UID: \"e74c1bee-e2b8-4b35-8ced-7832d9c1a824\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.100403 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.101185 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.104915 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-w2wxc" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.112672 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.137105 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.137900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.143785 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.162612 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.163346 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cw2jf" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.163589 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.168907 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l7vqh" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.175613 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.176342 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.178430 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bnlmt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.185877 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.200487 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.205878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zhs\" (UniqueName: \"kubernetes.io/projected/e74c1bee-e2b8-4b35-8ced-7832d9c1a824-kube-api-access-m6zhs\") pod \"barbican-operator-controller-manager-7f86f8796f-kp86b\" (UID: \"e74c1bee-e2b8-4b35-8ced-7832d9c1a824\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.205926 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbg9\" (UniqueName: 
\"kubernetes.io/projected/edb43be5-768f-4843-be5d-9826aa2e1a11-kube-api-access-7cbg9\") pod \"cinder-operator-controller-manager-7478f7dbf9-ftwtq\" (UID: \"edb43be5-768f-4843-be5d-9826aa2e1a11\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.205960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg77s\" (UniqueName: \"kubernetes.io/projected/253c59e7-dd33-4606-bd6e-21763472c862-kube-api-access-pg77s\") pod \"designate-operator-controller-manager-b45d7bf98-wwtlj\" (UID: \"253c59e7-dd33-4606-bd6e-21763472c862\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.223121 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.223917 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.226890 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sxntb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.237637 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.245738 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zhs\" (UniqueName: \"kubernetes.io/projected/e74c1bee-e2b8-4b35-8ced-7832d9c1a824-kube-api-access-m6zhs\") pod \"barbican-operator-controller-manager-7f86f8796f-kp86b\" (UID: \"e74c1bee-e2b8-4b35-8ced-7832d9c1a824\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.249705 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.250552 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.255645 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.255768 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.256429 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.257228 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hrvk8" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.262693 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-bzksj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.265890 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.275517 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.276605 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.282500 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xvs5b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.282664 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.298141 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.298916 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.304688 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.305710 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zck8j" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.306946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84hv\" (UniqueName: \"kubernetes.io/projected/f8378dce-4e90-4373-94ca-bd0420827dea-kube-api-access-s84hv\") pod \"glance-operator-controller-manager-78fdd796fd-qg9zd\" (UID: \"f8378dce-4e90-4373-94ca-bd0420827dea\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.306980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbg9\" (UniqueName: \"kubernetes.io/projected/edb43be5-768f-4843-be5d-9826aa2e1a11-kube-api-access-7cbg9\") pod \"cinder-operator-controller-manager-7478f7dbf9-ftwtq\" (UID: \"edb43be5-768f-4843-be5d-9826aa2e1a11\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg77s\" (UniqueName: \"kubernetes.io/projected/253c59e7-dd33-4606-bd6e-21763472c862-kube-api-access-pg77s\") pod \"designate-operator-controller-manager-b45d7bf98-wwtlj\" (UID: \"253c59e7-dd33-4606-bd6e-21763472c862\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307055 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwjr5\" (UniqueName: \"kubernetes.io/projected/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-kube-api-access-rwjr5\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307115 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwt85\" (UniqueName: \"kubernetes.io/projected/a9e13c49-33ca-4c87-9ef9-ae446cfb519e-kube-api-access-dwt85\") pod \"ironic-operator-controller-manager-598f7747c9-9wxhs\" (UID: \"a9e13c49-33ca-4c87-9ef9-ae446cfb519e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvwh\" (UniqueName: \"kubernetes.io/projected/ada55264-0fda-4f30-92a7-28add3873740-kube-api-access-dvvwh\") pod \"horizon-operator-controller-manager-77d5c5b54f-8tj4r\" (UID: \"ada55264-0fda-4f30-92a7-28add3873740\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.307165 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvbr\" (UniqueName: 
\"kubernetes.io/projected/aeeb554e-7369-4b95-8583-8e4b083e953c-kube-api-access-vxvbr\") pod \"heat-operator-controller-manager-594c8c9d5d-fl5ld\" (UID: \"aeeb554e-7369-4b95-8583-8e4b083e953c\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.321845 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.322591 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.326417 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.327146 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.331321 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.335565 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.342798 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.350210 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.351050 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.359876 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.360044 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mmvrc" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.360249 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bnnc8" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.360283 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tb8g6" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.361209 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.365557 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wcnld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.376766 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.388064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg77s\" (UniqueName: \"kubernetes.io/projected/253c59e7-dd33-4606-bd6e-21763472c862-kube-api-access-pg77s\") pod \"designate-operator-controller-manager-b45d7bf98-wwtlj\" (UID: \"253c59e7-dd33-4606-bd6e-21763472c862\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408308 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwjr5\" (UniqueName: \"kubernetes.io/projected/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-kube-api-access-rwjr5\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sdg8\" (UniqueName: \"kubernetes.io/projected/7a907425-198b-4c21-b16c-55d94617275f-kube-api-access-9sdg8\") pod \"nova-operator-controller-manager-7bdb645866-tpkqs\" (UID: \"7a907425-198b-4c21-b16c-55d94617275f\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408407 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnl4\" (UniqueName: \"kubernetes.io/projected/e1a8680b-2bd2-43c7-839c-8b2b899a953b-kube-api-access-qgnl4\") pod \"manila-operator-controller-manager-78c6999f6f-92hpt\" (UID: \"e1a8680b-2bd2-43c7-839c-8b2b899a953b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwt85\" (UniqueName: \"kubernetes.io/projected/a9e13c49-33ca-4c87-9ef9-ae446cfb519e-kube-api-access-dwt85\") pod \"ironic-operator-controller-manager-598f7747c9-9wxhs\" (UID: \"a9e13c49-33ca-4c87-9ef9-ae446cfb519e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvwh\" (UniqueName: \"kubernetes.io/projected/ada55264-0fda-4f30-92a7-28add3873740-kube-api-access-dvvwh\") pod \"horizon-operator-controller-manager-77d5c5b54f-8tj4r\" (UID: \"ada55264-0fda-4f30-92a7-28add3873740\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408498 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvbr\" (UniqueName: \"kubernetes.io/projected/aeeb554e-7369-4b95-8583-8e4b083e953c-kube-api-access-vxvbr\") pod \"heat-operator-controller-manager-594c8c9d5d-fl5ld\" (UID: \"aeeb554e-7369-4b95-8583-8e4b083e953c\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408518 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr86k\" (UniqueName: 
\"kubernetes.io/projected/f8d76fd0-1a35-4848-8e19-611f437c0b2e-kube-api-access-gr86k\") pod \"neutron-operator-controller-manager-78d58447c5-r56wb\" (UID: \"f8d76fd0-1a35-4848-8e19-611f437c0b2e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408543 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84hv\" (UniqueName: \"kubernetes.io/projected/f8378dce-4e90-4373-94ca-bd0420827dea-kube-api-access-s84hv\") pod \"glance-operator-controller-manager-78fdd796fd-qg9zd\" (UID: \"f8378dce-4e90-4373-94ca-bd0420827dea\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xjz\" (UniqueName: \"kubernetes.io/projected/81067808-b0be-41f6-a1f3-462cb917996b-kube-api-access-g6xjz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt\" (UID: \"81067808-b0be-41f6-a1f3-462cb917996b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.408609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcck6\" (UniqueName: \"kubernetes.io/projected/a02f7d35-b75c-44f5-ad70-f08b553de32c-kube-api-access-gcck6\") pod \"keystone-operator-controller-manager-b8b6d4659-k2wt9\" (UID: \"a02f7d35-b75c-44f5-ad70-f08b553de32c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.408920 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.408961 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:06.908944045 +0000 UTC m=+1048.560268447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.425676 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.431551 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.439295 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbg9\" (UniqueName: \"kubernetes.io/projected/edb43be5-768f-4843-be5d-9826aa2e1a11-kube-api-access-7cbg9\") pod \"cinder-operator-controller-manager-7478f7dbf9-ftwtq\" (UID: \"edb43be5-768f-4843-be5d-9826aa2e1a11\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.445379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvbr\" (UniqueName: \"kubernetes.io/projected/aeeb554e-7369-4b95-8583-8e4b083e953c-kube-api-access-vxvbr\") pod \"heat-operator-controller-manager-594c8c9d5d-fl5ld\" (UID: \"aeeb554e-7369-4b95-8583-8e4b083e953c\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.449554 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.450165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84hv\" (UniqueName: \"kubernetes.io/projected/f8378dce-4e90-4373-94ca-bd0420827dea-kube-api-access-s84hv\") pod \"glance-operator-controller-manager-78fdd796fd-qg9zd\" (UID: \"f8378dce-4e90-4373-94ca-bd0420827dea\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.450396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.457771 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.458496 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwt85\" (UniqueName: \"kubernetes.io/projected/a9e13c49-33ca-4c87-9ef9-ae446cfb519e-kube-api-access-dwt85\") pod \"ironic-operator-controller-manager-598f7747c9-9wxhs\" (UID: \"a9e13c49-33ca-4c87-9ef9-ae446cfb519e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.458640 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.458777 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lvrpw" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.459345 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.459862 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.463275 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wzmtz" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.470060 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvwh\" (UniqueName: \"kubernetes.io/projected/ada55264-0fda-4f30-92a7-28add3873740-kube-api-access-dvvwh\") pod \"horizon-operator-controller-manager-77d5c5b54f-8tj4r\" (UID: \"ada55264-0fda-4f30-92a7-28add3873740\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.470991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwjr5\" (UniqueName: \"kubernetes.io/projected/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-kube-api-access-rwjr5\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.476711 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.480956 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.518322 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.518374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcck6\" (UniqueName: \"kubernetes.io/projected/a02f7d35-b75c-44f5-ad70-f08b553de32c-kube-api-access-gcck6\") pod \"keystone-operator-controller-manager-b8b6d4659-k2wt9\" (UID: \"a02f7d35-b75c-44f5-ad70-f08b553de32c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.518417 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8xc\" (UniqueName: \"kubernetes.io/projected/dac55dc4-5aae-4281-a74f-10260dd5b1ac-kube-api-access-fc8xc\") pod \"octavia-operator-controller-manager-5f4cd88d46-x5tlj\" (UID: \"dac55dc4-5aae-4281-a74f-10260dd5b1ac\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sdg8\" (UniqueName: \"kubernetes.io/projected/7a907425-198b-4c21-b16c-55d94617275f-kube-api-access-9sdg8\") pod \"nova-operator-controller-manager-7bdb645866-tpkqs\" (UID: \"7a907425-198b-4c21-b16c-55d94617275f\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhg2\" (UniqueName: \"kubernetes.io/projected/a65c2925-9923-40c5-aba0-b9342b6dab40-kube-api-access-czhg2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" 
Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnl4\" (UniqueName: \"kubernetes.io/projected/e1a8680b-2bd2-43c7-839c-8b2b899a953b-kube-api-access-qgnl4\") pod \"manila-operator-controller-manager-78c6999f6f-92hpt\" (UID: \"e1a8680b-2bd2-43c7-839c-8b2b899a953b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544905 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdhq\" (UniqueName: \"kubernetes.io/projected/fcf3c608-e55e-490d-9a69-0f00d7fef3fd-kube-api-access-pzdhq\") pod \"ovn-operator-controller-manager-6f75f45d54-tbb5q\" (UID: \"fcf3c608-e55e-490d-9a69-0f00d7fef3fd\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.544995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr86k\" (UniqueName: \"kubernetes.io/projected/f8d76fd0-1a35-4848-8e19-611f437c0b2e-kube-api-access-gr86k\") pod \"neutron-operator-controller-manager-78d58447c5-r56wb\" (UID: \"f8d76fd0-1a35-4848-8e19-611f437c0b2e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.545081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g6xjz\" (UniqueName: \"kubernetes.io/projected/81067808-b0be-41f6-a1f3-462cb917996b-kube-api-access-g6xjz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt\" (UID: \"81067808-b0be-41f6-a1f3-462cb917996b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.550342 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.622203 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.624713 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sdg8\" (UniqueName: \"kubernetes.io/projected/7a907425-198b-4c21-b16c-55d94617275f-kube-api-access-9sdg8\") pod \"nova-operator-controller-manager-7bdb645866-tpkqs\" (UID: \"7a907425-198b-4c21-b16c-55d94617275f\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.638022 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.639184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcck6\" (UniqueName: \"kubernetes.io/projected/a02f7d35-b75c-44f5-ad70-f08b553de32c-kube-api-access-gcck6\") pod \"keystone-operator-controller-manager-b8b6d4659-k2wt9\" (UID: \"a02f7d35-b75c-44f5-ad70-f08b553de32c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.639663 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.640138 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-n4kbq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.643433 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.647130 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnl4\" (UniqueName: \"kubernetes.io/projected/e1a8680b-2bd2-43c7-839c-8b2b899a953b-kube-api-access-qgnl4\") pod \"manila-operator-controller-manager-78c6999f6f-92hpt\" (UID: \"e1a8680b-2bd2-43c7-839c-8b2b899a953b\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.647352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8xc\" (UniqueName: \"kubernetes.io/projected/dac55dc4-5aae-4281-a74f-10260dd5b1ac-kube-api-access-fc8xc\") pod \"octavia-operator-controller-manager-5f4cd88d46-x5tlj\" (UID: \"dac55dc4-5aae-4281-a74f-10260dd5b1ac\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.666659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.666689 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-czhg2\" (UniqueName: \"kubernetes.io/projected/a65c2925-9923-40c5-aba0-b9342b6dab40-kube-api-access-czhg2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.666752 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfvf\" (UniqueName: \"kubernetes.io/projected/c9891074-5241-4f51-8a9d-b28240983c3a-kube-api-access-fxfvf\") pod \"placement-operator-controller-manager-79d5ccc684-pdtk9\" (UID: \"c9891074-5241-4f51-8a9d-b28240983c3a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.666774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdhq\" (UniqueName: \"kubernetes.io/projected/fcf3c608-e55e-490d-9a69-0f00d7fef3fd-kube-api-access-pzdhq\") pod \"ovn-operator-controller-manager-6f75f45d54-tbb5q\" (UID: \"fcf3c608-e55e-490d-9a69-0f00d7fef3fd\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.666888 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.666955 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:07.166937052 +0000 UTC m=+1048.818261444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.670154 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.677606 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr86k\" (UniqueName: \"kubernetes.io/projected/f8d76fd0-1a35-4848-8e19-611f437c0b2e-kube-api-access-gr86k\") pod \"neutron-operator-controller-manager-78d58447c5-r56wb\" (UID: \"f8d76fd0-1a35-4848-8e19-611f437c0b2e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.677883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xjz\" (UniqueName: \"kubernetes.io/projected/81067808-b0be-41f6-a1f3-462cb917996b-kube-api-access-g6xjz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt\" (UID: \"81067808-b0be-41f6-a1f3-462cb917996b\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.685347 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.686225 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.689192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8xc\" (UniqueName: \"kubernetes.io/projected/dac55dc4-5aae-4281-a74f-10260dd5b1ac-kube-api-access-fc8xc\") pod \"octavia-operator-controller-manager-5f4cd88d46-x5tlj\" (UID: \"dac55dc4-5aae-4281-a74f-10260dd5b1ac\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.690390 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bkjq6" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.692640 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.695915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdhq\" (UniqueName: \"kubernetes.io/projected/fcf3c608-e55e-490d-9a69-0f00d7fef3fd-kube-api-access-pzdhq\") pod \"ovn-operator-controller-manager-6f75f45d54-tbb5q\" (UID: \"fcf3c608-e55e-490d-9a69-0f00d7fef3fd\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.701319 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhg2\" (UniqueName: \"kubernetes.io/projected/a65c2925-9923-40c5-aba0-b9342b6dab40-kube-api-access-czhg2\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.707118 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.720074 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.720836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.724963 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jkcr2" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.731367 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.732052 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.768386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfvf\" (UniqueName: \"kubernetes.io/projected/c9891074-5241-4f51-8a9d-b28240983c3a-kube-api-access-fxfvf\") pod \"placement-operator-controller-manager-79d5ccc684-pdtk9\" (UID: \"c9891074-5241-4f51-8a9d-b28240983c3a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.768476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6dr\" (UniqueName: \"kubernetes.io/projected/90b9f19a-9a04-408e-ad25-eeccd712b2d3-kube-api-access-9f6dr\") pod \"telemetry-operator-controller-manager-85cd9769bb-zcgrn\" (UID: \"90b9f19a-9a04-408e-ad25-eeccd712b2d3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.768578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjxb\" (UniqueName: \"kubernetes.io/projected/b304e371-f956-4d28-bb4e-3a1a9ae3e860-kube-api-access-bkjxb\") pod \"swift-operator-controller-manager-547cbdb99f-6bmlg\" (UID: \"b304e371-f956-4d28-bb4e-3a1a9ae3e860\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.793457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfvf\" (UniqueName: \"kubernetes.io/projected/c9891074-5241-4f51-8a9d-b28240983c3a-kube-api-access-fxfvf\") pod \"placement-operator-controller-manager-79d5ccc684-pdtk9\" (UID: \"c9891074-5241-4f51-8a9d-b28240983c3a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 
14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.804577 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.805579 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.807982 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nhs2r" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.832673 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.835333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.849556 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.850584 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.869396 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.870253 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.871954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6dr\" (UniqueName: \"kubernetes.io/projected/90b9f19a-9a04-408e-ad25-eeccd712b2d3-kube-api-access-9f6dr\") pod \"telemetry-operator-controller-manager-85cd9769bb-zcgrn\" (UID: \"90b9f19a-9a04-408e-ad25-eeccd712b2d3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.872020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvbk\" (UniqueName: \"kubernetes.io/projected/df55c968-3acc-4ffc-a674-bda364677610-kube-api-access-mvvbk\") pod \"test-operator-controller-manager-69797bbcbd-b5xnd\" (UID: \"df55c968-3acc-4ffc-a674-bda364677610\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.872067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjxb\" (UniqueName: \"kubernetes.io/projected/b304e371-f956-4d28-bb4e-3a1a9ae3e860-kube-api-access-bkjxb\") pod \"swift-operator-controller-manager-547cbdb99f-6bmlg\" (UID: \"b304e371-f956-4d28-bb4e-3a1a9ae3e860\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.875737 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-t4nbv" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.886393 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.903647 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9f6dr\" (UniqueName: \"kubernetes.io/projected/90b9f19a-9a04-408e-ad25-eeccd712b2d3-kube-api-access-9f6dr\") pod \"telemetry-operator-controller-manager-85cd9769bb-zcgrn\" (UID: \"90b9f19a-9a04-408e-ad25-eeccd712b2d3\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.923310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjxb\" (UniqueName: \"kubernetes.io/projected/b304e371-f956-4d28-bb4e-3a1a9ae3e860-kube-api-access-bkjxb\") pod \"swift-operator-controller-manager-547cbdb99f-6bmlg\" (UID: \"b304e371-f956-4d28-bb4e-3a1a9ae3e860\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.924641 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.933698 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.935436 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.937708 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.940314 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.940384 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lvt2j" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.940578 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.943401 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.948379 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.949196 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.953664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-87gph" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.957749 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp"] Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.970796 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.973532 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvvbk\" (UniqueName: \"kubernetes.io/projected/df55c968-3acc-4ffc-a674-bda364677610-kube-api-access-mvvbk\") pod \"test-operator-controller-manager-69797bbcbd-b5xnd\" (UID: \"df55c968-3acc-4ffc-a674-bda364677610\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.973611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.973759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.973789 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4p8g\" (UniqueName: \"kubernetes.io/projected/70855ffd-2b62-4761-a9ac-b944d0e1115a-kube-api-access-g4p8g\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:06 crc 
kubenswrapper[4833]: I0127 14:29:06.973849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:06 crc kubenswrapper[4833]: I0127 14:29:06.974003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbf7q\" (UniqueName: \"kubernetes.io/projected/56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d-kube-api-access-vbf7q\") pod \"watcher-operator-controller-manager-86c96597bf-79g5g\" (UID: \"56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d\") " pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.974581 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:06 crc kubenswrapper[4833]: E0127 14:29:06.974672 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:07.974651851 +0000 UTC m=+1049.625976253 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.016696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvvbk\" (UniqueName: \"kubernetes.io/projected/df55c968-3acc-4ffc-a674-bda364677610-kube-api-access-mvvbk\") pod \"test-operator-controller-manager-69797bbcbd-b5xnd\" (UID: \"df55c968-3acc-4ffc-a674-bda364677610\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.034467 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b"] Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.037969 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.056433 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:07 crc kubenswrapper[4833]: W0127 14:29:07.056546 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode74c1bee_e2b8_4b35_8ced_7832d9c1a824.slice/crio-a450fb8e3e95b8148e0ba9b6e518b71c7506cea9db6a21da690f349fcdb87192 WatchSource:0}: Error finding container a450fb8e3e95b8148e0ba9b6e518b71c7506cea9db6a21da690f349fcdb87192: Status 404 returned error can't find the container with id a450fb8e3e95b8148e0ba9b6e518b71c7506cea9db6a21da690f349fcdb87192 Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.069023 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkx94\" (UniqueName: \"kubernetes.io/projected/906c1999-d97c-4a87-b1d0-8d06bef0b396-kube-api-access-rkx94\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5zxfp\" (UID: \"906c1999-d97c-4a87-b1d0-8d06bef0b396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076539 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076569 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4p8g\" (UniqueName: 
\"kubernetes.io/projected/70855ffd-2b62-4761-a9ac-b944d0e1115a-kube-api-access-g4p8g\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076666 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbf7q\" (UniqueName: \"kubernetes.io/projected/56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d-kube-api-access-vbf7q\") pod \"watcher-operator-controller-manager-86c96597bf-79g5g\" (UID: \"56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d\") " pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.076864 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.076924 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:07.5769047 +0000 UTC m=+1049.228229102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.076961 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.076988 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.077014 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:07.577004963 +0000 UTC m=+1049.228329365 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.095800 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbf7q\" (UniqueName: \"kubernetes.io/projected/56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d-kube-api-access-vbf7q\") pod \"watcher-operator-controller-manager-86c96597bf-79g5g\" (UID: \"56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d\") " pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.100981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4p8g\" (UniqueName: \"kubernetes.io/projected/70855ffd-2b62-4761-a9ac-b944d0e1115a-kube-api-access-g4p8g\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.142038 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.154826 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd"] Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.181356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkx94\" (UniqueName: \"kubernetes.io/projected/906c1999-d97c-4a87-b1d0-8d06bef0b396-kube-api-access-rkx94\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5zxfp\" (UID: \"906c1999-d97c-4a87-b1d0-8d06bef0b396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.181402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.181615 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.181667 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:08.181650962 +0000 UTC m=+1049.832975364 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.208629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkx94\" (UniqueName: \"kubernetes.io/projected/906c1999-d97c-4a87-b1d0-8d06bef0b396-kube-api-access-rkx94\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5zxfp\" (UID: \"906c1999-d97c-4a87-b1d0-8d06bef0b396\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.323642 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.352605 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.359718 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld"] Jan 27 14:29:07 crc kubenswrapper[4833]: W0127 14:29:07.420134 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeeb554e_7369_4b95_8583_8e4b083e953c.slice/crio-39ee94a8f34b4e26ed373233b592813f0fadc49d9be399fcca823c79c762b7aa WatchSource:0}: Error finding container 39ee94a8f34b4e26ed373233b592813f0fadc49d9be399fcca823c79c762b7aa: Status 404 returned error can't find the container with id 39ee94a8f34b4e26ed373233b592813f0fadc49d9be399fcca823c79c762b7aa Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.630243 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.630369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.630397 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.630493 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:08.630474583 +0000 UTC m=+1050.281799075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "webhook-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.630527 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: E0127 14:29:07.630587 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:08.630567856 +0000 UTC m=+1050.281892358 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.710568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" event={"ID":"aeeb554e-7369-4b95-8583-8e4b083e953c","Type":"ContainerStarted","Data":"39ee94a8f34b4e26ed373233b592813f0fadc49d9be399fcca823c79c762b7aa"} Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.715094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" event={"ID":"e74c1bee-e2b8-4b35-8ced-7832d9c1a824","Type":"ContainerStarted","Data":"a450fb8e3e95b8148e0ba9b6e518b71c7506cea9db6a21da690f349fcdb87192"} Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.720117 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" event={"ID":"f8378dce-4e90-4373-94ca-bd0420827dea","Type":"ContainerStarted","Data":"5ba654155f82100754f9b85929013830c7b3c09de91ccb38f3d1f60e79fceb3d"} Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.881955 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq"] Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.889344 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs"] Jan 27 14:29:07 crc kubenswrapper[4833]: W0127 14:29:07.894223 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb43be5_768f_4843_be5d_9826aa2e1a11.slice/crio-092c7934e20d0036f9458bd98652800bba4eb5281e30af9a3fff19e228d1d9d3 WatchSource:0}: Error finding container 092c7934e20d0036f9458bd98652800bba4eb5281e30af9a3fff19e228d1d9d3: Status 404 returned error can't find the container with id 092c7934e20d0036f9458bd98652800bba4eb5281e30af9a3fff19e228d1d9d3 Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.898825 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj"] Jan 27 14:29:07 crc kubenswrapper[4833]: I0127 14:29:07.942597 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.035839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.036312 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.036365 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:10.03634039 +0000 UTC m=+1051.687664792 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.161205 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.164208 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs"] Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.176565 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a907425_198b_4c21_b16c_55d94617275f.slice/crio-0fec54924fb3f009a778a18e2dc73b5eafa609bc93a9eb67f0d40d1fe32ce171 WatchSource:0}: Error finding container 0fec54924fb3f009a778a18e2dc73b5eafa609bc93a9eb67f0d40d1fe32ce171: Status 404 returned error can't find the container with id 0fec54924fb3f009a778a18e2dc73b5eafa609bc93a9eb67f0d40d1fe32ce171 Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.176882 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.187039 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.191725 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt"] Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.204750 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada55264_0fda_4f30_92a7_28add3873740.slice/crio-cc1edf127d0cc9d0ae450c94cd4a7e288c81fb0084b160aa04e47b41afa99238 WatchSource:0}: Error finding container cc1edf127d0cc9d0ae450c94cd4a7e288c81fb0084b160aa04e47b41afa99238: Status 404 returned error can't find the container with id cc1edf127d0cc9d0ae450c94cd4a7e288c81fb0084b160aa04e47b41afa99238 Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.210915 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.235163 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.239381 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.239641 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.239724 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:10.239709439 +0000 UTC m=+1051.891033841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.239748 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.245063 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt"] Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.248244 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9891074_5241_4f51_8a9d_b28240983c3a.slice/crio-a76f6289bc721b1fffb4d1ed24b4a3349f2ec2f601d0a175a4642583aaa8b87d WatchSource:0}: Error finding container a76f6289bc721b1fffb4d1ed24b4a3349f2ec2f601d0a175a4642583aaa8b87d: Status 404 returned error can't find the container with id a76f6289bc721b1fffb4d1ed24b4a3349f2ec2f601d0a175a4642583aaa8b87d Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.252572 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn"] Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.266636 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd"] Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.277013 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac55dc4_5aae_4281_a74f_10260dd5b1ac.slice/crio-f1a19d9b69d3046e120a0510b0454be09a8f1d0cfc2704ab297cedfea5587915 WatchSource:0}: Error finding container 
f1a19d9b69d3046e120a0510b0454be09a8f1d0cfc2704ab297cedfea5587915: Status 404 returned error can't find the container with id f1a19d9b69d3046e120a0510b0454be09a8f1d0cfc2704ab297cedfea5587915 Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.279616 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc8xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-x5tlj_openstack-operators(dac55dc4-5aae-4281-a74f-10260dd5b1ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.279868 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b9f19a_9a04_408e_ad25_eeccd712b2d3.slice/crio-a4ea83c877e534ccd940a758a8ae0cfe13ac12aa01e50cd2ed7fedc9781704f1 WatchSource:0}: Error finding container a4ea83c877e534ccd940a758a8ae0cfe13ac12aa01e50cd2ed7fedc9781704f1: Status 404 returned error can't find the container with id a4ea83c877e534ccd940a758a8ae0cfe13ac12aa01e50cd2ed7fedc9781704f1 Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.281545 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" podUID="dac55dc4-5aae-4281-a74f-10260dd5b1ac" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.284784 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9f6dr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-zcgrn_openstack-operators(90b9f19a-9a04-408e-ad25-eeccd712b2d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.285967 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" podUID="90b9f19a-9a04-408e-ad25-eeccd712b2d3" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.287189 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp"] Jan 27 14:29:08 crc kubenswrapper[4833]: W0127 14:29:08.287334 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf55c968_3acc_4ffc_a674_bda364677610.slice/crio-08f14b060b9bda2a4d0d2221b50f5d1e252aa294559f56cc43c0415ff396c71d WatchSource:0}: Error finding container 08f14b060b9bda2a4d0d2221b50f5d1e252aa294559f56cc43c0415ff396c71d: Status 404 returned error can't find the container with id 
08f14b060b9bda2a4d0d2221b50f5d1e252aa294559f56cc43c0415ff396c71d Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.300430 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkx94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5zxfp_openstack-operators(906c1999-d97c-4a87-b1d0-8d06bef0b396): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.302096 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" podUID="906c1999-d97c-4a87-b1d0-8d06bef0b396" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.302779 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g6xjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt_openstack-operators(81067808-b0be-41f6-a1f3-462cb917996b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.304526 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" podUID="81067808-b0be-41f6-a1f3-462cb917996b" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.318430 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g"] Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.323831 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.22:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vbf7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-86c96597bf-79g5g_openstack-operators(56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.325113 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" podUID="56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.651378 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.651545 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.651673 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.651735 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:10.651703579 +0000 UTC m=+1052.303027981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.652026 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.652057 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:10.652050767 +0000 UTC m=+1052.303375169 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "webhook-server-cert" not found Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.734768 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" event={"ID":"56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d","Type":"ContainerStarted","Data":"3ad6397fbdeb4899d8ea97167f82f8921acdd9870728c4fc63ecd47c99185373"} Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.736951 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" podUID="56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d" Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.739913 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" podUID="906c1999-d97c-4a87-b1d0-8d06bef0b396" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.743715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" event={"ID":"906c1999-d97c-4a87-b1d0-8d06bef0b396","Type":"ContainerStarted","Data":"81b595439274c150825a97cab365a1e169245a409f772be48b75e3dbf43405c9"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.743976 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" event={"ID":"f8d76fd0-1a35-4848-8e19-611f437c0b2e","Type":"ContainerStarted","Data":"ac3a971c317025c85af71261b77f3caef19f2c8aa1fa8e5b6593b1b028e60bd9"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.745167 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" event={"ID":"c9891074-5241-4f51-8a9d-b28240983c3a","Type":"ContainerStarted","Data":"a76f6289bc721b1fffb4d1ed24b4a3349f2ec2f601d0a175a4642583aaa8b87d"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.753298 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" event={"ID":"81067808-b0be-41f6-a1f3-462cb917996b","Type":"ContainerStarted","Data":"74b44264ab5951b2ce82e7bcebc9dbce813a6704f677fa50a026964f35cdfae1"} Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.754727 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" podUID="81067808-b0be-41f6-a1f3-462cb917996b" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.755936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" event={"ID":"dac55dc4-5aae-4281-a74f-10260dd5b1ac","Type":"ContainerStarted","Data":"f1a19d9b69d3046e120a0510b0454be09a8f1d0cfc2704ab297cedfea5587915"} Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.757225 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" podUID="dac55dc4-5aae-4281-a74f-10260dd5b1ac" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.761155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" event={"ID":"253c59e7-dd33-4606-bd6e-21763472c862","Type":"ContainerStarted","Data":"bac370eb2f8cab31d2c150f77b04620b9dd2188db69c11e19e423924569ac23c"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.783586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" event={"ID":"a9e13c49-33ca-4c87-9ef9-ae446cfb519e","Type":"ContainerStarted","Data":"9f9f55af003a93eec7af8fbb29fbf9a8323ddedb5b34245bf76e011169c90c2e"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.788636 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" event={"ID":"b304e371-f956-4d28-bb4e-3a1a9ae3e860","Type":"ContainerStarted","Data":"7df4a7bc7167d2c6c1f6c97ed0fc79a61b1008da93a2b465ef96c0c4e90bc4f5"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.794637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" event={"ID":"7a907425-198b-4c21-b16c-55d94617275f","Type":"ContainerStarted","Data":"0fec54924fb3f009a778a18e2dc73b5eafa609bc93a9eb67f0d40d1fe32ce171"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.796262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" 
event={"ID":"e1a8680b-2bd2-43c7-839c-8b2b899a953b","Type":"ContainerStarted","Data":"c077b8b280fccacf93710831039dae3340b937737648410851fb3a6a5eb5f754"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.797609 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" event={"ID":"a02f7d35-b75c-44f5-ad70-f08b553de32c","Type":"ContainerStarted","Data":"f4a8580f2519242a1cbb5bef8f88b5d2b8e1dfc9183c33dca4930b9c3626c4c1"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.799103 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" event={"ID":"edb43be5-768f-4843-be5d-9826aa2e1a11","Type":"ContainerStarted","Data":"092c7934e20d0036f9458bd98652800bba4eb5281e30af9a3fff19e228d1d9d3"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.802855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" event={"ID":"fcf3c608-e55e-490d-9a69-0f00d7fef3fd","Type":"ContainerStarted","Data":"aea86f8bcf6f57eaf8cdeaaa12ef0b50e68d0c938c2271e416eb2356720a44a7"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.804368 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" event={"ID":"df55c968-3acc-4ffc-a674-bda364677610","Type":"ContainerStarted","Data":"08f14b060b9bda2a4d0d2221b50f5d1e252aa294559f56cc43c0415ff396c71d"} Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.806644 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" event={"ID":"90b9f19a-9a04-408e-ad25-eeccd712b2d3","Type":"ContainerStarted","Data":"a4ea83c877e534ccd940a758a8ae0cfe13ac12aa01e50cd2ed7fedc9781704f1"} Jan 27 14:29:08 crc kubenswrapper[4833]: E0127 14:29:08.808050 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" podUID="90b9f19a-9a04-408e-ad25-eeccd712b2d3" Jan 27 14:29:08 crc kubenswrapper[4833]: I0127 14:29:08.809520 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" event={"ID":"ada55264-0fda-4f30-92a7-28add3873740","Type":"ContainerStarted","Data":"cc1edf127d0cc9d0ae450c94cd4a7e288c81fb0084b160aa04e47b41afa99238"} Jan 27 14:29:09 crc kubenswrapper[4833]: E0127 14:29:09.831990 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" podUID="81067808-b0be-41f6-a1f3-462cb917996b" Jan 27 14:29:09 crc kubenswrapper[4833]: E0127 14:29:09.843516 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" podUID="90b9f19a-9a04-408e-ad25-eeccd712b2d3" Jan 27 14:29:09 crc kubenswrapper[4833]: E0127 14:29:09.843531 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" podUID="906c1999-d97c-4a87-b1d0-8d06bef0b396" Jan 27 14:29:09 crc kubenswrapper[4833]: E0127 14:29:09.843536 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/watcher-operator:fff505f956f14308ed9dc10b024aabee3e262435\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" podUID="56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d" Jan 27 14:29:09 crc kubenswrapper[4833]: E0127 14:29:09.845431 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" podUID="dac55dc4-5aae-4281-a74f-10260dd5b1ac" Jan 27 14:29:10 crc kubenswrapper[4833]: I0127 14:29:10.096470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.096659 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.096703 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:14.096689218 +0000 UTC m=+1055.748013620 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: I0127 14:29:10.298950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.299144 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.299227 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:14.299200567 +0000 UTC m=+1055.950524969 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: I0127 14:29:10.705157 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:10 crc kubenswrapper[4833]: I0127 14:29:10.705271 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.705286 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.705363 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:14.70534077 +0000 UTC m=+1056.356665212 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "webhook-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.705411 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:10 crc kubenswrapper[4833]: E0127 14:29:10.705494 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:14.705477383 +0000 UTC m=+1056.356801785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: I0127 14:29:14.131494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.131674 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.132138 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert 
podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:22.132118822 +0000 UTC m=+1063.783443214 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: I0127 14:29:14.333741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.333944 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.334041 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:22.334019325 +0000 UTC m=+1063.985343737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: I0127 14:29:14.738925 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:14 crc kubenswrapper[4833]: I0127 14:29:14.739058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.739139 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.739232 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:22.739209885 +0000 UTC m=+1064.390534367 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.739226 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:29:14 crc kubenswrapper[4833]: E0127 14:29:14.739316 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:22.739293577 +0000 UTC m=+1064.390617979 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "webhook-server-cert" not found Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.073903 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.074507 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvvbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-b5xnd_openstack-operators(df55c968-3acc-4ffc-a674-bda364677610): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.075672 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" podUID="df55c968-3acc-4ffc-a674-bda364677610" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.833962 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.834182 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr86k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-r56wb_openstack-operators(f8d76fd0-1a35-4848-8e19-611f437c0b2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.835371 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" podUID="f8d76fd0-1a35-4848-8e19-611f437c0b2e" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.919026 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" podUID="f8d76fd0-1a35-4848-8e19-611f437c0b2e" Jan 27 14:29:21 crc kubenswrapper[4833]: E0127 14:29:21.919568 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" podUID="df55c968-3acc-4ffc-a674-bda364677610" Jan 27 14:29:22 crc kubenswrapper[4833]: I0127 14:29:22.160923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.161073 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.161159 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert podName:4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:38.161141344 +0000 UTC m=+1079.812465746 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert") pod "infra-operator-controller-manager-694cf4f878-j2jq9" (UID: "4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: I0127 14:29:22.363580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.363799 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.363882 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert podName:a65c2925-9923-40c5-aba0-b9342b6dab40 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:38.363861466 +0000 UTC m=+1080.015185958 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854flztk" (UID: "a65c2925-9923-40c5-aba0-b9342b6dab40") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.467171 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.467412 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pzdhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-tbb5q_openstack-operators(fcf3c608-e55e-490d-9a69-0f00d7fef3fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.468625 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" podUID="fcf3c608-e55e-490d-9a69-0f00d7fef3fd" Jan 27 14:29:22 crc kubenswrapper[4833]: I0127 14:29:22.768279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod 
\"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:22 crc kubenswrapper[4833]: I0127 14:29:22.768414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.768573 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.768623 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs podName:70855ffd-2b62-4761-a9ac-b944d0e1115a nodeName:}" failed. No retries permitted until 2026-01-27 14:29:38.768609385 +0000 UTC m=+1080.419933787 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs") pod "openstack-operator-controller-manager-6f7cb759dd-k25f5" (UID: "70855ffd-2b62-4761-a9ac-b944d0e1115a") : secret "metrics-server-cert" not found Jan 27 14:29:22 crc kubenswrapper[4833]: I0127 14:29:22.785642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-webhook-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:22 crc kubenswrapper[4833]: E0127 14:29:22.927992 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" podUID="fcf3c608-e55e-490d-9a69-0f00d7fef3fd" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.073912 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.074107 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bkjxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-6bmlg_openstack-operators(b304e371-f956-4d28-bb4e-3a1a9ae3e860): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.075308 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" podUID="b304e371-f956-4d28-bb4e-3a1a9ae3e860" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.645145 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.645662 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvvwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-8tj4r_openstack-operators(ada55264-0fda-4f30-92a7-28add3873740): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.646825 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" podUID="ada55264-0fda-4f30-92a7-28add3873740" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.938140 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" podUID="b304e371-f956-4d28-bb4e-3a1a9ae3e860" Jan 27 14:29:23 crc kubenswrapper[4833]: E0127 14:29:23.941062 4833 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" podUID="ada55264-0fda-4f30-92a7-28add3873740" Jan 27 14:29:24 crc kubenswrapper[4833]: E0127 14:29:24.299971 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 27 14:29:24 crc kubenswrapper[4833]: E0127 14:29:24.300245 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgnl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-92hpt_openstack-operators(e1a8680b-2bd2-43c7-839c-8b2b899a953b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:24 crc kubenswrapper[4833]: E0127 14:29:24.301533 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" podUID="e1a8680b-2bd2-43c7-839c-8b2b899a953b" Jan 27 14:29:24 crc kubenswrapper[4833]: E0127 14:29:24.944797 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" podUID="e1a8680b-2bd2-43c7-839c-8b2b899a953b" Jan 27 14:29:25 crc kubenswrapper[4833]: E0127 14:29:25.072369 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658" Jan 27 14:29:25 crc kubenswrapper[4833]: E0127 14:29:25.072892 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sdg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-tpkqs_openstack-operators(7a907425-198b-4c21-b16c-55d94617275f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:25 crc kubenswrapper[4833]: E0127 14:29:25.074877 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" podUID="7a907425-198b-4c21-b16c-55d94617275f" Jan 27 14:29:25 crc kubenswrapper[4833]: E0127 14:29:25.948734 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" podUID="7a907425-198b-4c21-b16c-55d94617275f" Jan 27 14:29:26 crc kubenswrapper[4833]: E0127 14:29:26.802291 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 14:29:26 crc kubenswrapper[4833]: E0127 14:29:26.802530 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcck6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-k2wt9_openstack-operators(a02f7d35-b75c-44f5-ad70-f08b553de32c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:29:26 crc kubenswrapper[4833]: E0127 14:29:26.804609 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" podUID="a02f7d35-b75c-44f5-ad70-f08b553de32c" Jan 27 14:29:26 crc kubenswrapper[4833]: E0127 14:29:26.958927 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" podUID="a02f7d35-b75c-44f5-ad70-f08b553de32c" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.987920 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" event={"ID":"253c59e7-dd33-4606-bd6e-21763472c862","Type":"ContainerStarted","Data":"b797dbc0ce265f8b0a478d7490aa04abb489b355d5422a7ae29a800a460eeefe"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.988494 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.989429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" event={"ID":"aeeb554e-7369-4b95-8583-8e4b083e953c","Type":"ContainerStarted","Data":"6ce1e3f25e1220ed39928cfaef2ed858c7a4ac519c1c84a26f30b630a4be6bdd"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.990047 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.991501 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" event={"ID":"90b9f19a-9a04-408e-ad25-eeccd712b2d3","Type":"ContainerStarted","Data":"4ae04fb99b69c612c5f0756ccac3619c82399d272437c74cdec93f3f97900351"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.991852 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 
14:29:30.992976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" event={"ID":"906c1999-d97c-4a87-b1d0-8d06bef0b396","Type":"ContainerStarted","Data":"838e8a40308c9602ab48920dbdb263e648d82f99983bc1231861120f54dc913f"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.994472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" event={"ID":"dac55dc4-5aae-4281-a74f-10260dd5b1ac","Type":"ContainerStarted","Data":"b825fcd02ab99498327e13356df527938138849ce513db0d65dac845fecf6c14"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.994789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.996826 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" event={"ID":"e74c1bee-e2b8-4b35-8ced-7832d9c1a824","Type":"ContainerStarted","Data":"ed598784f13d5be31df5839a670069a7cff9ee791bce84090b56b4e3c5bf87cc"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.997154 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.998102 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" event={"ID":"a9e13c49-33ca-4c87-9ef9-ae446cfb519e","Type":"ContainerStarted","Data":"585701709f419b1bff0e3396e3e71aa6b04aee1b1b154471cff025c5260ecda9"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.998408 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:30 crc 
kubenswrapper[4833]: I0127 14:29:30.999394 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" event={"ID":"edb43be5-768f-4843-be5d-9826aa2e1a11","Type":"ContainerStarted","Data":"63d3bbd723f637241fb947902d6814cc8a5c1f1625d743a9d2a20f1e6f12ae0f"} Jan 27 14:29:30 crc kubenswrapper[4833]: I0127 14:29:30.999765 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.001187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" event={"ID":"f8378dce-4e90-4373-94ca-bd0420827dea","Type":"ContainerStarted","Data":"a5f46ed8e3c5f2a4ddd4bffc99cbfd5b818fb389044244416fac8f2232e89558"} Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.001746 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.008679 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" event={"ID":"56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d","Type":"ContainerStarted","Data":"e1fab8f6c6cd610d3fdaf26888b3b49318cf5b525d91925fa3a5674d8daa77f8"} Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.008954 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.009366 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" podStartSLOduration=4.888221916 podStartE2EDuration="25.009352924s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" 
firstStartedPulling="2026-01-27 14:29:07.931680871 +0000 UTC m=+1049.583005273" lastFinishedPulling="2026-01-27 14:29:28.052811879 +0000 UTC m=+1069.704136281" observedRunningTime="2026-01-27 14:29:31.002488912 +0000 UTC m=+1072.653813314" watchObservedRunningTime="2026-01-27 14:29:31.009352924 +0000 UTC m=+1072.660677326" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.010574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" event={"ID":"c9891074-5241-4f51-8a9d-b28240983c3a","Type":"ContainerStarted","Data":"9135ebdfadcaf80c1bfa73a5538c162e0b3891e42fb58a12429fceb5ffc61179"} Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.011110 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.015960 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" event={"ID":"81067808-b0be-41f6-a1f3-462cb917996b","Type":"ContainerStarted","Data":"59ff06b56b418cffcdbe555e66e6fe4bab67f04a3e0bb23439732a64b3e82e24"} Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.016180 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.033866 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" podStartSLOduration=4.16826552 podStartE2EDuration="25.033844907s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.187915009 +0000 UTC m=+1048.839239411" lastFinishedPulling="2026-01-27 14:29:28.053494396 +0000 UTC m=+1069.704818798" observedRunningTime="2026-01-27 14:29:31.029513959 +0000 
UTC m=+1072.680838361" watchObservedRunningTime="2026-01-27 14:29:31.033844907 +0000 UTC m=+1072.685169309" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.057217 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" podStartSLOduration=6.741132734 podStartE2EDuration="25.057191751s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.903237919 +0000 UTC m=+1049.554562321" lastFinishedPulling="2026-01-27 14:29:26.219296926 +0000 UTC m=+1067.870621338" observedRunningTime="2026-01-27 14:29:31.052688188 +0000 UTC m=+1072.704012590" watchObservedRunningTime="2026-01-27 14:29:31.057191751 +0000 UTC m=+1072.708516153" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.079191 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" podStartSLOduration=5.936052778 podStartE2EDuration="25.079174751s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.076564992 +0000 UTC m=+1048.727889394" lastFinishedPulling="2026-01-27 14:29:26.219686965 +0000 UTC m=+1067.871011367" observedRunningTime="2026-01-27 14:29:31.074541616 +0000 UTC m=+1072.725866008" watchObservedRunningTime="2026-01-27 14:29:31.079174751 +0000 UTC m=+1072.730499153" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.084350 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" podStartSLOduration=3.537081686 podStartE2EDuration="25.084334041s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.284645474 +0000 UTC m=+1049.935969876" lastFinishedPulling="2026-01-27 14:29:29.831897829 +0000 UTC m=+1071.483222231" observedRunningTime="2026-01-27 14:29:31.08429094 +0000 UTC 
m=+1072.735615332" watchObservedRunningTime="2026-01-27 14:29:31.084334041 +0000 UTC m=+1072.735658443" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.100535 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5zxfp" podStartSLOduration=3.509627777 podStartE2EDuration="25.100514025s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.300310365 +0000 UTC m=+1049.951634767" lastFinishedPulling="2026-01-27 14:29:29.891196613 +0000 UTC m=+1071.542521015" observedRunningTime="2026-01-27 14:29:31.094890584 +0000 UTC m=+1072.746214986" watchObservedRunningTime="2026-01-27 14:29:31.100514025 +0000 UTC m=+1072.751838427" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.108571 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" podStartSLOduration=4.483633681 podStartE2EDuration="25.108550116s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.427947475 +0000 UTC m=+1049.079271877" lastFinishedPulling="2026-01-27 14:29:28.05286391 +0000 UTC m=+1069.704188312" observedRunningTime="2026-01-27 14:29:31.107949552 +0000 UTC m=+1072.759273954" watchObservedRunningTime="2026-01-27 14:29:31.108550116 +0000 UTC m=+1072.759874538" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.125945 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" podStartSLOduration=3.488394486 podStartE2EDuration="25.125929381s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.279477914 +0000 UTC m=+1049.930802316" lastFinishedPulling="2026-01-27 14:29:29.917012819 +0000 UTC m=+1071.568337211" observedRunningTime="2026-01-27 14:29:31.120819974 +0000 UTC m=+1072.772144376" 
watchObservedRunningTime="2026-01-27 14:29:31.125929381 +0000 UTC m=+1072.777253783" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.158275 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" podStartSLOduration=5.041041681 podStartE2EDuration="25.158252571s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.935688201 +0000 UTC m=+1049.587012603" lastFinishedPulling="2026-01-27 14:29:28.052899091 +0000 UTC m=+1069.704223493" observedRunningTime="2026-01-27 14:29:31.13947879 +0000 UTC m=+1072.790803192" watchObservedRunningTime="2026-01-27 14:29:31.158252571 +0000 UTC m=+1072.809576973" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.183868 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" podStartSLOduration=3.516467699 podStartE2EDuration="25.183852161s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.322596413 +0000 UTC m=+1049.973920815" lastFinishedPulling="2026-01-27 14:29:29.989980875 +0000 UTC m=+1071.641305277" observedRunningTime="2026-01-27 14:29:31.182510217 +0000 UTC m=+1072.833834619" watchObservedRunningTime="2026-01-27 14:29:31.183852161 +0000 UTC m=+1072.835176563" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.240460 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" podStartSLOduration=5.452795183 podStartE2EDuration="25.240432746s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.265206057 +0000 UTC m=+1049.916530459" lastFinishedPulling="2026-01-27 14:29:28.05284363 +0000 UTC m=+1069.704168022" observedRunningTime="2026-01-27 14:29:31.205992835 +0000 UTC m=+1072.857317247" 
watchObservedRunningTime="2026-01-27 14:29:31.240432746 +0000 UTC m=+1072.891757148" Jan 27 14:29:31 crc kubenswrapper[4833]: I0127 14:29:31.242113 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" podStartSLOduration=3.62879947 podStartE2EDuration="25.242107658s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.302651214 +0000 UTC m=+1049.953975616" lastFinishedPulling="2026-01-27 14:29:29.915959402 +0000 UTC m=+1071.567283804" observedRunningTime="2026-01-27 14:29:31.236518868 +0000 UTC m=+1072.887843270" watchObservedRunningTime="2026-01-27 14:29:31.242107658 +0000 UTC m=+1072.893432060" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.057048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" event={"ID":"f8d76fd0-1a35-4848-8e19-611f437c0b2e","Type":"ContainerStarted","Data":"b334b49ecfd644084efc8d5d3db1f3d9e28e8f765d8568ac6b9234a089a1e3f4"} Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.057813 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.058317 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" event={"ID":"fcf3c608-e55e-490d-9a69-0f00d7fef3fd","Type":"ContainerStarted","Data":"5844f7855d0726cfeedda4cf8b91f330eb539dd9e8002ef49156092ee110465e"} Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.058505 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.069593 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" podStartSLOduration=2.346634244 podStartE2EDuration="30.069574352s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.17422045 +0000 UTC m=+1049.825544852" lastFinishedPulling="2026-01-27 14:29:35.897160558 +0000 UTC m=+1077.548484960" observedRunningTime="2026-01-27 14:29:36.068884136 +0000 UTC m=+1077.720208568" watchObservedRunningTime="2026-01-27 14:29:36.069574352 +0000 UTC m=+1077.720898764" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.084157 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" podStartSLOduration=2.391334973 podStartE2EDuration="30.084133567s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:07.95082731 +0000 UTC m=+1049.602151712" lastFinishedPulling="2026-01-27 14:29:35.643625904 +0000 UTC m=+1077.294950306" observedRunningTime="2026-01-27 14:29:36.084128357 +0000 UTC m=+1077.735452769" watchObservedRunningTime="2026-01-27 14:29:36.084133567 +0000 UTC m=+1077.735457989" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.379566 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-kp86b" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.464030 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wwtlj" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.484457 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-qg9zd" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.522365 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fl5ld" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.642211 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-9wxhs" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.735042 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ftwtq" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.853208 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-x5tlj" Jan 27 14:29:36 crc kubenswrapper[4833]: I0127 14:29:36.974878 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt" Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.040180 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-pdtk9" Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.068001 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" event={"ID":"df55c968-3acc-4ffc-a674-bda364677610","Type":"ContainerStarted","Data":"f787ee54b2a83eb5d942ff545928b78c54a01eb3fdd4a3eef13ecdd8febba949"} Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.068497 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.073561 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-zcgrn" Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.100970 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" podStartSLOduration=2.779756914 podStartE2EDuration="31.100947872s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.309291311 +0000 UTC m=+1049.960615713" lastFinishedPulling="2026-01-27 14:29:36.630482269 +0000 UTC m=+1078.281806671" observedRunningTime="2026-01-27 14:29:37.083885676 +0000 UTC m=+1078.735210088" watchObservedRunningTime="2026-01-27 14:29:37.100947872 +0000 UTC m=+1078.752272274" Jan 27 14:29:37 crc kubenswrapper[4833]: I0127 14:29:37.328399 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-86c96597bf-79g5g" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.224513 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.229794 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0-cert\") pod \"infra-operator-controller-manager-694cf4f878-j2jq9\" (UID: \"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.398198 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.427987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.436207 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a65c2925-9923-40c5-aba0-b9342b6dab40-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854flztk\" (UID: \"a65c2925-9923-40c5-aba0-b9342b6dab40\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.438154 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.833599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.838613 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70855ffd-2b62-4761-a9ac-b944d0e1115a-metrics-certs\") pod \"openstack-operator-controller-manager-6f7cb759dd-k25f5\" (UID: \"70855ffd-2b62-4761-a9ac-b944d0e1115a\") " pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.899227 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9"] Jan 27 14:29:38 crc kubenswrapper[4833]: W0127 14:29:38.899854 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6e97a8_b1f3_4c3f_a9fb_7dc03163a2b0.slice/crio-5732bf8418396e185cf303ee4b42ea1d14ef3605eb3a74de7679cda71ef97c03 WatchSource:0}: Error finding container 5732bf8418396e185cf303ee4b42ea1d14ef3605eb3a74de7679cda71ef97c03: Status 404 returned error can't find the container with id 5732bf8418396e185cf303ee4b42ea1d14ef3605eb3a74de7679cda71ef97c03 Jan 27 14:29:38 crc kubenswrapper[4833]: I0127 14:29:38.991037 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk"] Jan 27 14:29:38 crc kubenswrapper[4833]: W0127 
14:29:38.993662 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65c2925_9923_40c5_aba0_b9342b6dab40.slice/crio-fd8f6a060701a611af2e7beaf4d1c79f008aff372bb9a39237aad44c2195059b WatchSource:0}: Error finding container fd8f6a060701a611af2e7beaf4d1c79f008aff372bb9a39237aad44c2195059b: Status 404 returned error can't find the container with id fd8f6a060701a611af2e7beaf4d1c79f008aff372bb9a39237aad44c2195059b Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.091051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" event={"ID":"e1a8680b-2bd2-43c7-839c-8b2b899a953b","Type":"ContainerStarted","Data":"c1993d475ae3cdb6c38474068cac064d52e785a842a77872fb438f3252b3cbc7"} Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.091257 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.092650 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" event={"ID":"b304e371-f956-4d28-bb4e-3a1a9ae3e860","Type":"ContainerStarted","Data":"168b38cb58bbd6bcc3a812afe06c1cbac77de7a171af2e0b9c9527400d059d3f"} Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.092789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.093509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" event={"ID":"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0","Type":"ContainerStarted","Data":"5732bf8418396e185cf303ee4b42ea1d14ef3605eb3a74de7679cda71ef97c03"} Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 
14:29:39.094627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" event={"ID":"a65c2925-9923-40c5-aba0-b9342b6dab40","Type":"ContainerStarted","Data":"fd8f6a060701a611af2e7beaf4d1c79f008aff372bb9a39237aad44c2195059b"} Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.095781 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" event={"ID":"ada55264-0fda-4f30-92a7-28add3873740","Type":"ContainerStarted","Data":"f97a0ba627c3d7e7d76c4f77e1803a2b786fa3b31615f4acec8c7fb4bad2dc63"} Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.095988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.112088 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" podStartSLOduration=3.030753795 podStartE2EDuration="33.112069159s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.234071508 +0000 UTC m=+1049.885395910" lastFinishedPulling="2026-01-27 14:29:38.315386822 +0000 UTC m=+1079.966711274" observedRunningTime="2026-01-27 14:29:39.109198017 +0000 UTC m=+1080.760522419" watchObservedRunningTime="2026-01-27 14:29:39.112069159 +0000 UTC m=+1080.763393551" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.128688 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" podStartSLOduration=2.928556436 podStartE2EDuration="33.128673745s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.2125998 +0000 UTC m=+1049.863924202" lastFinishedPulling="2026-01-27 14:29:38.412717109 
+0000 UTC m=+1080.064041511" observedRunningTime="2026-01-27 14:29:39.127078175 +0000 UTC m=+1080.778402577" watchObservedRunningTime="2026-01-27 14:29:39.128673745 +0000 UTC m=+1080.779998137" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.138596 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.143808 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" podStartSLOduration=3.095708669 podStartE2EDuration="33.143789653s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.269497684 +0000 UTC m=+1049.920822076" lastFinishedPulling="2026-01-27 14:29:38.317578658 +0000 UTC m=+1079.968903060" observedRunningTime="2026-01-27 14:29:39.139088436 +0000 UTC m=+1080.790412858" watchObservedRunningTime="2026-01-27 14:29:39.143789653 +0000 UTC m=+1080.795114055" Jan 27 14:29:39 crc kubenswrapper[4833]: I0127 14:29:39.590113 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5"] Jan 27 14:29:39 crc kubenswrapper[4833]: W0127 14:29:39.596956 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70855ffd_2b62_4761_a9ac_b944d0e1115a.slice/crio-62c21abd279d168c25265f65aa292184bf582f93acc60d93612cd67210495b82 WatchSource:0}: Error finding container 62c21abd279d168c25265f65aa292184bf582f93acc60d93612cd67210495b82: Status 404 returned error can't find the container with id 62c21abd279d168c25265f65aa292184bf582f93acc60d93612cd67210495b82 Jan 27 14:29:40 crc kubenswrapper[4833]: I0127 14:29:40.106813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" event={"ID":"70855ffd-2b62-4761-a9ac-b944d0e1115a","Type":"ContainerStarted","Data":"ac355c89f56cafc2cbbf701999ae93b2745d5fccb7a2ff1f97f079cb206e40b7"} Jan 27 14:29:40 crc kubenswrapper[4833]: I0127 14:29:40.106855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" event={"ID":"70855ffd-2b62-4761-a9ac-b944d0e1115a","Type":"ContainerStarted","Data":"62c21abd279d168c25265f65aa292184bf582f93acc60d93612cd67210495b82"} Jan 27 14:29:40 crc kubenswrapper[4833]: I0127 14:29:40.107319 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:29:40 crc kubenswrapper[4833]: I0127 14:29:40.150712 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" podStartSLOduration=34.15068637 podStartE2EDuration="34.15068637s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:29:40.137286565 +0000 UTC m=+1081.788610967" watchObservedRunningTime="2026-01-27 14:29:40.15068637 +0000 UTC m=+1081.802010782" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.120468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" event={"ID":"a02f7d35-b75c-44f5-ad70-f08b553de32c","Type":"ContainerStarted","Data":"316429279fb7486fa0419bd2ae9f25aa0474650e9160367dbb40b6ca85deba1b"} Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.121311 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:42 crc 
kubenswrapper[4833]: I0127 14:29:42.122117 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" event={"ID":"4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0","Type":"ContainerStarted","Data":"2a4ae760d48b68977c681cababbb33d85dd2ddd0bd1d319f601cd4b5accf1e29"} Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.122293 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.124098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" event={"ID":"a65c2925-9923-40c5-aba0-b9342b6dab40","Type":"ContainerStarted","Data":"b0348667b7c7741821af1a499dd106b848470fdf80cf39ba2e6f5be4f96f865d"} Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.124520 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.125685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" event={"ID":"7a907425-198b-4c21-b16c-55d94617275f","Type":"ContainerStarted","Data":"1f9f47adfa8ce7913513ebe795729fc2d09d278f279e7c5eed2a7cd76e27137c"} Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.125921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.143584 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" podStartSLOduration=2.654755036 podStartE2EDuration="36.143558s" podCreationTimestamp="2026-01-27 14:29:06 +0000 
UTC" firstStartedPulling="2026-01-27 14:29:08.249973946 +0000 UTC m=+1049.901298348" lastFinishedPulling="2026-01-27 14:29:41.73877691 +0000 UTC m=+1083.390101312" observedRunningTime="2026-01-27 14:29:42.137760155 +0000 UTC m=+1083.789084597" watchObservedRunningTime="2026-01-27 14:29:42.143558 +0000 UTC m=+1083.794882412" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.167981 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" podStartSLOduration=33.636021851 podStartE2EDuration="36.167960791s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:38.901317965 +0000 UTC m=+1080.552642367" lastFinishedPulling="2026-01-27 14:29:41.433256875 +0000 UTC m=+1083.084581307" observedRunningTime="2026-01-27 14:29:42.163280103 +0000 UTC m=+1083.814604535" watchObservedRunningTime="2026-01-27 14:29:42.167960791 +0000 UTC m=+1083.819285193" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.182064 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" podStartSLOduration=2.937084429 podStartE2EDuration="36.182047583s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" firstStartedPulling="2026-01-27 14:29:08.185548533 +0000 UTC m=+1049.836872935" lastFinishedPulling="2026-01-27 14:29:41.430511677 +0000 UTC m=+1083.081836089" observedRunningTime="2026-01-27 14:29:42.175930651 +0000 UTC m=+1083.827255073" watchObservedRunningTime="2026-01-27 14:29:42.182047583 +0000 UTC m=+1083.833371985" Jan 27 14:29:42 crc kubenswrapper[4833]: I0127 14:29:42.210430 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" podStartSLOduration=33.767530332 podStartE2EDuration="36.210407363s" podCreationTimestamp="2026-01-27 14:29:06 +0000 UTC" 
firstStartedPulling="2026-01-27 14:29:38.995848911 +0000 UTC m=+1080.647173313" lastFinishedPulling="2026-01-27 14:29:41.438725932 +0000 UTC m=+1083.090050344" observedRunningTime="2026-01-27 14:29:42.208572837 +0000 UTC m=+1083.859897249" watchObservedRunningTime="2026-01-27 14:29:42.210407363 +0000 UTC m=+1083.861731795" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.625330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8tj4r" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.711731 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-tbb5q" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.838021 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-r56wb" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.839408 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tpkqs" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.927773 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-k2wt9" Jan 27 14:29:46 crc kubenswrapper[4833]: I0127 14:29:46.941707 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-92hpt" Jan 27 14:29:47 crc kubenswrapper[4833]: I0127 14:29:47.060090 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6bmlg" Jan 27 14:29:47 crc kubenswrapper[4833]: I0127 14:29:47.144811 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-b5xnd" Jan 27 14:29:48 crc kubenswrapper[4833]: I0127 14:29:48.406108 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-j2jq9" Jan 27 14:29:48 crc kubenswrapper[4833]: I0127 14:29:48.446856 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854flztk" Jan 27 14:29:49 crc kubenswrapper[4833]: I0127 14:29:49.146562 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f7cb759dd-k25f5" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.177618 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm"] Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.180331 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.183040 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.183462 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.198414 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm"] Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.266934 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.267303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn87v\" (UniqueName: \"kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.267366 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.368959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.369091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.369178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn87v\" (UniqueName: \"kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.371041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.389611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.399490 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn87v\" (UniqueName: \"kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v\") pod \"collect-profiles-29492070-8xzgm\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.513126 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:00 crc kubenswrapper[4833]: I0127 14:30:00.756786 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm"] Jan 27 14:30:00 crc kubenswrapper[4833]: W0127 14:30:00.768215 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a87d6aa_97b8_4bd0_afe4_92f991e99a1a.slice/crio-680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494 WatchSource:0}: Error finding container 680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494: Status 404 returned error can't find the container with id 680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494 Jan 27 14:30:01 crc kubenswrapper[4833]: I0127 14:30:01.275211 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" containerID="70582989ec55b8d9aedd06d847cb7342b6378b266543f07ba14999161c062d1a" exitCode=0 Jan 27 14:30:01 crc kubenswrapper[4833]: I0127 14:30:01.275251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" event={"ID":"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a","Type":"ContainerDied","Data":"70582989ec55b8d9aedd06d847cb7342b6378b266543f07ba14999161c062d1a"} Jan 27 14:30:01 crc kubenswrapper[4833]: I0127 14:30:01.275286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" event={"ID":"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a","Type":"ContainerStarted","Data":"680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494"} Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.601038 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.703153 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume\") pod \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.703231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn87v\" (UniqueName: \"kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v\") pod \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.703321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume\") pod \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\" (UID: \"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a\") " Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.704241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" (UID: "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.708195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v" (OuterVolumeSpecName: "kube-api-access-gn87v") pod "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" (UID: "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a"). InnerVolumeSpecName "kube-api-access-gn87v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.709094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" (UID: "9a87d6aa-97b8-4bd0-afe4-92f991e99a1a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.805378 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.805940 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:02 crc kubenswrapper[4833]: I0127 14:30:02.805968 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn87v\" (UniqueName: \"kubernetes.io/projected/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a-kube-api-access-gn87v\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:03 crc kubenswrapper[4833]: I0127 14:30:03.304876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" event={"ID":"9a87d6aa-97b8-4bd0-afe4-92f991e99a1a","Type":"ContainerDied","Data":"680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494"} Jan 27 14:30:03 crc kubenswrapper[4833]: I0127 14:30:03.304933 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="680d13b9276fa4fc26e31e7bcd43c5f08e2efe016fcc48a66f79603f66cfa494" Jan 27 14:30:03 crc kubenswrapper[4833]: I0127 14:30:03.304969 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.112653 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:09 crc kubenswrapper[4833]: E0127 14:30:09.114533 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" containerName="collect-profiles" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.114639 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" containerName="collect-profiles" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.114876 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" containerName="collect-profiles" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.115904 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.119241 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.119530 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.119917 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.120119 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-m8rxf" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.134840 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.172862 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.174304 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.179062 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.188920 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.200381 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgc5r\" (UniqueName: \"kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.200436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.301873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.301983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwkf\" (UniqueName: 
\"kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.302005 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgc5r\" (UniqueName: \"kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.302026 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.302140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.303081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.323667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgc5r\" (UniqueName: 
\"kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r\") pod \"dnsmasq-dns-675f4bcbfc-ddpqk\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.403240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwkf\" (UniqueName: \"kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.403313 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.403406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.404223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.404506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" 
(UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.425901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwkf\" (UniqueName: \"kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf\") pod \"dnsmasq-dns-78dd6ddcc-t2dk7\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.434816 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.489299 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.731310 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:09 crc kubenswrapper[4833]: I0127 14:30:09.852784 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:09 crc kubenswrapper[4833]: W0127 14:30:09.854617 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8964d82_960a_4ae0_b139_c63807edf22b.slice/crio-a8fd640b88f93eedb19e9b4f50a2b2d53eb415703b4df78993e6b33fd62868d2 WatchSource:0}: Error finding container a8fd640b88f93eedb19e9b4f50a2b2d53eb415703b4df78993e6b33fd62868d2: Status 404 returned error can't find the container with id a8fd640b88f93eedb19e9b4f50a2b2d53eb415703b4df78993e6b33fd62868d2 Jan 27 14:30:10 crc kubenswrapper[4833]: I0127 14:30:10.356453 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" 
event={"ID":"a8964d82-960a-4ae0-b139-c63807edf22b","Type":"ContainerStarted","Data":"a8fd640b88f93eedb19e9b4f50a2b2d53eb415703b4df78993e6b33fd62868d2"} Jan 27 14:30:10 crc kubenswrapper[4833]: I0127 14:30:10.357492 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" event={"ID":"a3f76c0c-4be4-4630-887b-c14881562d9e","Type":"ContainerStarted","Data":"64628e135327fd1cc359ca3554aa6f59f0c3e75b2c46f18f3b5bbe1a091938b2"} Jan 27 14:30:11 crc kubenswrapper[4833]: I0127 14:30:11.917533 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:11 crc kubenswrapper[4833]: I0127 14:30:11.943529 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:11 crc kubenswrapper[4833]: I0127 14:30:11.944827 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:11 crc kubenswrapper[4833]: I0127 14:30:11.966020 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.045232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6jhc\" (UniqueName: \"kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.045281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 
14:30:12.045322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.147011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6jhc\" (UniqueName: \"kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.147070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.147134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.148219 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.148410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.170410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6jhc\" (UniqueName: \"kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc\") pod \"dnsmasq-dns-666b6646f7-9p2vm\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.203329 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.227995 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.229065 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.240236 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.263978 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.353930 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.354357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.354378 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrzp\" (UniqueName: \"kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.455741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.455846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.455870 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrzp\" (UniqueName: \"kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.457041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.457494 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.477398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrzp\" (UniqueName: \"kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp\") pod \"dnsmasq-dns-57d769cc4f-x8v54\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:12 crc kubenswrapper[4833]: I0127 14:30:12.548745 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.092898 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.095204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.097610 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.097809 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.097965 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.098153 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.098701 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.098755 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.101482 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bpn7b" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.113431 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266437 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjkv\" (UniqueName: 
\"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266501 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266548 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266657 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266731 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266748 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.266767 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.370849 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.379116 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.380340 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.384537 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.384883 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385258 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385550 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385721 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385818 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.385981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.386228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjkv\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.386387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.384886 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.388220 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.391652 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.391688 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.388280 4833 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.388417 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tgbfx" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.388487 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.388713 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.392864 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.395967 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.400427 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.402968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.403628 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.410644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.421386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.430336 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.453746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.454337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.454408 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjkv\" (UniqueName: 
\"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv\") pod \"rabbitmq-server-0\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492312 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492389 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492409 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492548 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492576 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.492648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrspt\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrspt\" (UniqueName: 
\"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.593984 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.594001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.594230 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.595659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.595891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc 
kubenswrapper[4833]: I0127 14:30:13.595933 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.596463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.596533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.599382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.599752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.599972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.602703 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.615381 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrspt\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.618057 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.723293 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:30:13 crc kubenswrapper[4833]: I0127 14:30:13.800350 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.545929 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.547498 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.549752 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sbkgp" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.550699 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.550860 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.550897 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.555425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.569379 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791387 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791528 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvgr\" (UniqueName: \"kubernetes.io/projected/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kube-api-access-7xvgr\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.791685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvgr\" (UniqueName: \"kubernetes.io/projected/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kube-api-access-7xvgr\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892625 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892742 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.892779 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.893533 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.893750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc 
kubenswrapper[4833]: I0127 14:30:14.893886 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.894784 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c797d35-1d15-4eed-a88c-20fd3aa64b91-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.894823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c797d35-1d15-4eed-a88c-20fd3aa64b91-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.902002 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.906238 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c797d35-1d15-4eed-a88c-20fd3aa64b91-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.913848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:14 crc kubenswrapper[4833]: I0127 14:30:14.933200 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvgr\" (UniqueName: \"kubernetes.io/projected/4c797d35-1d15-4eed-a88c-20fd3aa64b91-kube-api-access-7xvgr\") pod \"openstack-galera-0\" (UID: \"4c797d35-1d15-4eed-a88c-20fd3aa64b91\") " pod="openstack/openstack-galera-0" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.164820 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.973579 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.975216 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.977478 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-959st" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.978324 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.979045 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.979175 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 14:30:15 crc kubenswrapper[4833]: I0127 14:30:15.987784 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110191 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110369 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110393 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz87m\" (UniqueName: \"kubernetes.io/projected/a4a22294-2921-44ee-bdf0-41c631d2962c-kube-api-access-zz87m\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc 
kubenswrapper[4833]: I0127 14:30:16.110412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.110602 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211502 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211533 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211646 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz87m\" (UniqueName: \"kubernetes.io/projected/a4a22294-2921-44ee-bdf0-41c631d2962c-kube-api-access-zz87m\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211664 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.211693 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.212619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.213064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.213363 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.214723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.216078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a4a22294-2921-44ee-bdf0-41c631d2962c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.218542 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.232735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a22294-2921-44ee-bdf0-41c631d2962c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.239646 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.240228 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz87m\" (UniqueName: \"kubernetes.io/projected/a4a22294-2921-44ee-bdf0-41c631d2962c-kube-api-access-zz87m\") pod \"openstack-cell1-galera-0\" (UID: \"a4a22294-2921-44ee-bdf0-41c631d2962c\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.299558 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.300802 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.303298 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.303772 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7fpjg" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.303926 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.314740 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.316609 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.415906 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-kolla-config\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.416166 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.416261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-config-data\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " 
pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.416367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7swq\" (UniqueName: \"kubernetes.io/projected/cf31f7d3-86f9-490a-806a-3944c7d60c10-kube-api-access-b7swq\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.416436 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.518059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-kolla-config\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.518115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.518156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-config-data\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.518205 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-b7swq\" (UniqueName: \"kubernetes.io/projected/cf31f7d3-86f9-490a-806a-3944c7d60c10-kube-api-access-b7swq\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.518236 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.519163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-kolla-config\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.520328 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf31f7d3-86f9-490a-806a-3944c7d60c10-config-data\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.523034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-combined-ca-bundle\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.524501 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf31f7d3-86f9-490a-806a-3944c7d60c10-memcached-tls-certs\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 
crc kubenswrapper[4833]: I0127 14:30:16.552122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7swq\" (UniqueName: \"kubernetes.io/projected/cf31f7d3-86f9-490a-806a-3944c7d60c10-kube-api-access-b7swq\") pod \"memcached-0\" (UID: \"cf31f7d3-86f9-490a-806a-3944c7d60c10\") " pod="openstack/memcached-0" Jan 27 14:30:16 crc kubenswrapper[4833]: I0127 14:30:16.629814 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.096605 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.097898 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.099897 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qhvv9" Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.119330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.247018 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx9k\" (UniqueName: \"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k\") pod \"kube-state-metrics-0\" (UID: \"534b7701-45c8-403b-8601-3e22e9177c61\") " pod="openstack/kube-state-metrics-0" Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.348076 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czx9k\" (UniqueName: \"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k\") pod \"kube-state-metrics-0\" (UID: \"534b7701-45c8-403b-8601-3e22e9177c61\") " pod="openstack/kube-state-metrics-0" Jan 27 14:30:18 crc 
kubenswrapper[4833]: I0127 14:30:18.381025 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czx9k\" (UniqueName: \"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k\") pod \"kube-state-metrics-0\" (UID: \"534b7701-45c8-403b-8601-3e22e9177c61\") " pod="openstack/kube-state-metrics-0" Jan 27 14:30:18 crc kubenswrapper[4833]: I0127 14:30:18.424075 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.509954 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.513031 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.514670 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.514962 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.515005 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.515393 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.515646 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.516494 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 
14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.522398 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.525863 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.526538 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-r6fq9" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666219 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666267 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdpw\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666887 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666915 
4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.666997 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdpw\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768096 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 
crc kubenswrapper[4833]: I0127 14:30:19.768117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768164 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768203 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768236 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768269 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc 
kubenswrapper[4833]: I0127 14:30:19.768317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.768942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.769680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.770400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.773638 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.773652 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.773684 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae037006a250a748df6e15e9e2e300ecef710dd9481d48fc8efc4ea8fd9ab428/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.773692 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.774536 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.776739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.781196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.785662 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdpw\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.813057 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:19 crc kubenswrapper[4833]: I0127 14:30:19.849263 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.431763 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6584j"] Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.433401 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.441038 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.441376 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hx6xh" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.443284 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6584j"] Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.444573 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.483506 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9xc7c"] Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.486015 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501473 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-ovn-controller-tls-certs\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501521 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-log-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501645 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bea80e-a86d-40c6-b72f-9bab663cc6ea-scripts\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " 
pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4hp\" (UniqueName: \"kubernetes.io/projected/71bea80e-a86d-40c6-b72f-9bab663cc6ea-kube-api-access-dv4hp\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.501701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-combined-ca-bundle\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.502283 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9xc7c"] Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.602997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71bea80e-a86d-40c6-b72f-9bab663cc6ea-scripts\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603093 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4hp\" (UniqueName: \"kubernetes.io/projected/71bea80e-a86d-40c6-b72f-9bab663cc6ea-kube-api-access-dv4hp\") pod 
\"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603122 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7nn\" (UniqueName: \"kubernetes.io/projected/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-kube-api-access-6j7nn\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-combined-ca-bundle\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-lib\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603251 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-run\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603286 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-log\") pod \"ovn-controller-ovs-9xc7c\" (UID: 
\"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-etc-ovs\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-ovn-controller-tls-certs\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-log-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.603510 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-scripts\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 
crc kubenswrapper[4833]: I0127 14:30:21.604845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-log-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.604983 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run-ovn\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.605045 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/71bea80e-a86d-40c6-b72f-9bab663cc6ea-var-run\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.607978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-ovn-controller-tls-certs\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.608051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71bea80e-a86d-40c6-b72f-9bab663cc6ea-combined-ca-bundle\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.608209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/71bea80e-a86d-40c6-b72f-9bab663cc6ea-scripts\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.623818 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4hp\" (UniqueName: \"kubernetes.io/projected/71bea80e-a86d-40c6-b72f-9bab663cc6ea-kube-api-access-dv4hp\") pod \"ovn-controller-6584j\" (UID: \"71bea80e-a86d-40c6-b72f-9bab663cc6ea\") " pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7nn\" (UniqueName: \"kubernetes.io/projected/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-kube-api-access-6j7nn\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-lib\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705646 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-run\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-log\") pod \"ovn-controller-ovs-9xc7c\" (UID: 
\"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-etc-ovs\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-scripts\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.705815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-run\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.706063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-etc-ovs\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.706194 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-lib\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.706270 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-var-log\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.707666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-scripts\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.727210 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7nn\" (UniqueName: \"kubernetes.io/projected/9c97e6a0-c4b8-4f8d-ac90-d28e93a48030-kube-api-access-6j7nn\") pod \"ovn-controller-ovs-9xc7c\" (UID: \"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030\") " pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.749416 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6584j" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.802775 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.949496 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.951158 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.955606 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zkc57" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.955870 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.956075 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.962637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.962966 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 14:30:21 crc kubenswrapper[4833]: I0127 14:30:21.967840 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5cp\" (UniqueName: \"kubernetes.io/projected/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-kube-api-access-5p5cp\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.010568 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.113880 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.113993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 
14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p5cp\" (UniqueName: \"kubernetes.io/projected/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-kube-api-access-5p5cp\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.114371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.117944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.118750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.119000 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.119958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.120487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-config\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.120543 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.126178 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.137333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc 
kubenswrapper[4833]: I0127 14:30:22.138049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p5cp\" (UniqueName: \"kubernetes.io/projected/4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1-kube-api-access-5p5cp\") pod \"ovsdbserver-nb-0\" (UID: \"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:22 crc kubenswrapper[4833]: I0127 14:30:22.285412 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:24 crc kubenswrapper[4833]: I0127 14:30:24.051660 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.237185 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.237752 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfwkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-t2dk7_openstack(a3f76c0c-4be4-4630-887b-c14881562d9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.239331 4833 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" podUID="a3f76c0c-4be4-4630-887b-c14881562d9e" Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.284093 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.286339 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgc5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-ddpqk_openstack(a8964d82-960a-4ae0-b139-c63807edf22b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4833]: E0127 14:30:25.287528 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" podUID="a8964d82-960a-4ae0-b139-c63807edf22b" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.480865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerStarted","Data":"deda8311b2cb01f19f425ccdda0ca2010357950ffaccd8108de2e5c724531ac3"} Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.657641 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.676952 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.678196 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.683660 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.683717 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.684155 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qmwg8" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.684383 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.690564 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825390 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825539 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825567 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-config\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prql\" (UniqueName: \"kubernetes.io/projected/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-kube-api-access-7prql\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.825606 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.910773 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.925010 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.926964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927089 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-config\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927173 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prql\" (UniqueName: \"kubernetes.io/projected/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-kube-api-access-7prql\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927217 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927251 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.927316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.928769 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.929101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.929632 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-scripts\") pod \"ovsdbserver-sb-0\" (UID: 
\"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.934587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.934895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.938539 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-config\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.951244 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.958243 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prql\" (UniqueName: \"kubernetes.io/projected/22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c-kube-api-access-7prql\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:25 crc kubenswrapper[4833]: I0127 14:30:25.977069 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.007643 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.027961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config\") pod \"a3f76c0c-4be4-4630-887b-c14881562d9e\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.028178 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgc5r\" (UniqueName: \"kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r\") pod \"a8964d82-960a-4ae0-b139-c63807edf22b\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.028228 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc\") pod \"a3f76c0c-4be4-4630-887b-c14881562d9e\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.028251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config\") pod \"a8964d82-960a-4ae0-b139-c63807edf22b\" (UID: \"a8964d82-960a-4ae0-b139-c63807edf22b\") " Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.028265 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwkf\" (UniqueName: 
\"kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf\") pod \"a3f76c0c-4be4-4630-887b-c14881562d9e\" (UID: \"a3f76c0c-4be4-4630-887b-c14881562d9e\") " Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.028964 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config" (OuterVolumeSpecName: "config") pod "a3f76c0c-4be4-4630-887b-c14881562d9e" (UID: "a3f76c0c-4be4-4630-887b-c14881562d9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.029291 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3f76c0c-4be4-4630-887b-c14881562d9e" (UID: "a3f76c0c-4be4-4630-887b-c14881562d9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.029885 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config" (OuterVolumeSpecName: "config") pod "a8964d82-960a-4ae0-b139-c63807edf22b" (UID: "a8964d82-960a-4ae0-b139-c63807edf22b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.033917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf" (OuterVolumeSpecName: "kube-api-access-xfwkf") pod "a3f76c0c-4be4-4630-887b-c14881562d9e" (UID: "a3f76c0c-4be4-4630-887b-c14881562d9e"). InnerVolumeSpecName "kube-api-access-xfwkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.036652 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r" (OuterVolumeSpecName: "kube-api-access-bgc5r") pod "a8964d82-960a-4ae0-b139-c63807edf22b" (UID: "a8964d82-960a-4ae0-b139-c63807edf22b"). InnerVolumeSpecName "kube-api-access-bgc5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.083648 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.113062 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.130211 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8964d82-960a-4ae0-b139-c63807edf22b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.130542 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwkf\" (UniqueName: \"kubernetes.io/projected/a3f76c0c-4be4-4630-887b-c14881562d9e-kube-api-access-xfwkf\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.130555 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.130564 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgc5r\" (UniqueName: \"kubernetes.io/projected/a8964d82-960a-4ae0-b139-c63807edf22b-kube-api-access-bgc5r\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.132212 4833 reconciler_common.go:293] "Volume 
detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3f76c0c-4be4-4630-887b-c14881562d9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4833]: W0127 14:30:26.133858 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc71020a9_7b34_40b0_843b_3779371376b0.slice/crio-248eafa721f5a13fb0ee42292e2a1530ca3dabe2184179fa5bc2525957eded76 WatchSource:0}: Error finding container 248eafa721f5a13fb0ee42292e2a1530ca3dabe2184179fa5bc2525957eded76: Status 404 returned error can't find the container with id 248eafa721f5a13fb0ee42292e2a1530ca3dabe2184179fa5bc2525957eded76 Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.140402 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.148902 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.156128 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.305020 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6584j"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.321072 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.352369 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.449459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9xc7c"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.494525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6584j" 
event={"ID":"71bea80e-a86d-40c6-b72f-9bab663cc6ea","Type":"ContainerStarted","Data":"7a3b4d4298d62964021f38e1d5594b545f0d19a329338aea67ad7f7878eb170e"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.496975 4833 generic.go:334] "Generic (PLEG): container finished" podID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerID="05ecbb40f66a4a715570ca91504496a702bdb79127c507c9f666c387479070d5" exitCode=0 Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.497018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" event={"ID":"8f9b4bbe-877b-49f5-b63a-f3a4b10553be","Type":"ContainerDied","Data":"05ecbb40f66a4a715570ca91504496a702bdb79127c507c9f666c387479070d5"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.497079 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" event={"ID":"8f9b4bbe-877b-49f5-b63a-f3a4b10553be","Type":"ContainerStarted","Data":"81a943c32d097703e0435bc0f51ccbd986ec59c42d460aa51317f26f153baf21"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.499608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"534b7701-45c8-403b-8601-3e22e9177c61","Type":"ContainerStarted","Data":"3483705b20007cf08a33addeb5147c3a3601d47e718c824101f5cd77f483666d"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.505497 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerStarted","Data":"31b253b2b1c2e0b6d64c16d152ba94aa9fd26269dd18f8a38cb93fa23543f067"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.508711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" event={"ID":"a3f76c0c-4be4-4630-887b-c14881562d9e","Type":"ContainerDied","Data":"64628e135327fd1cc359ca3554aa6f59f0c3e75b2c46f18f3b5bbe1a091938b2"} Jan 27 14:30:26 crc kubenswrapper[4833]: 
I0127 14:30:26.508795 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t2dk7" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.513021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" event={"ID":"c71020a9-7b34-40b0-843b-3779371376b0","Type":"ContainerStarted","Data":"248eafa721f5a13fb0ee42292e2a1530ca3dabe2184179fa5bc2525957eded76"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.515305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cf31f7d3-86f9-490a-806a-3944c7d60c10","Type":"ContainerStarted","Data":"75ad63d59a44ff9002f17acfbb135f81a6e76bc5b1eb041e87b432ee0d650fd7"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.529029 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4a22294-2921-44ee-bdf0-41c631d2962c","Type":"ContainerStarted","Data":"96cf015e2d3ddcfc7346d089dd6db3eb2e07906df6b278c93a6f1116faf62b70"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.533187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerStarted","Data":"b727daeb7d7be2e39649beb0b8e5f9598b930ec3499b760a63cd68db139076b0"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.535554 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" event={"ID":"a8964d82-960a-4ae0-b139-c63807edf22b","Type":"ContainerDied","Data":"a8fd640b88f93eedb19e9b4f50a2b2d53eb415703b4df78993e6b33fd62868d2"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.535698 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-ddpqk" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.537964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c797d35-1d15-4eed-a88c-20fd3aa64b91","Type":"ContainerStarted","Data":"a341d4bfc6d38a38165828d9350de06fa417cc28d9cd53a6659d2deea1094d70"} Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.576342 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.583882 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t2dk7"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.615325 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.621423 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-ddpqk"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.654674 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:30:26 crc kubenswrapper[4833]: W0127 14:30:26.680986 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b3a12f_a1eb_43c2_8ce8_3d0aa9b8d99c.slice/crio-7c97771a4bb098e21830cf3585bdd3ec9816eb1d864aa028734a68df1cc15e07 WatchSource:0}: Error finding container 7c97771a4bb098e21830cf3585bdd3ec9816eb1d864aa028734a68df1cc15e07: Status 404 returned error can't find the container with id 7c97771a4bb098e21830cf3585bdd3ec9816eb1d864aa028734a68df1cc15e07 Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.742691 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-695lh"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.744110 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.756912 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.774858 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-695lh"] Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmzq\" (UniqueName: \"kubernetes.io/projected/a8938ddd-5e46-4314-af72-13a83905b6c4-kube-api-access-kvmzq\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovs-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857566 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-combined-ca-bundle\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857625 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-695lh\" (UID: 
\"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8938ddd-5e46-4314-af72-13a83905b6c4-config\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.857776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovn-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovn-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959131 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmzq\" (UniqueName: \"kubernetes.io/projected/a8938ddd-5e46-4314-af72-13a83905b6c4-kube-api-access-kvmzq\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovs-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") 
" pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-combined-ca-bundle\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8938ddd-5e46-4314-af72-13a83905b6c4-config\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.959791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovn-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.960018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8938ddd-5e46-4314-af72-13a83905b6c4-ovs-rundir\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc 
kubenswrapper[4833]: I0127 14:30:26.960288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8938ddd-5e46-4314-af72-13a83905b6c4-config\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.966428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.978688 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8938ddd-5e46-4314-af72-13a83905b6c4-combined-ca-bundle\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:26 crc kubenswrapper[4833]: I0127 14:30:26.991934 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmzq\" (UniqueName: \"kubernetes.io/projected/a8938ddd-5e46-4314-af72-13a83905b6c4-kube-api-access-kvmzq\") pod \"ovn-controller-metrics-695lh\" (UID: \"a8938ddd-5e46-4314-af72-13a83905b6c4\") " pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.027499 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.059391 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.060723 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.062792 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.069621 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.110037 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-695lh" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.162025 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k849\" (UniqueName: \"kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.162111 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.162156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.162182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.165982 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.242489 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f76c0c-4be4-4630-887b-c14881562d9e" path="/var/lib/kubelet/pods/a3f76c0c-4be4-4630-887b-c14881562d9e/volumes" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.242845 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8964d82-960a-4ae0-b139-c63807edf22b" path="/var/lib/kubelet/pods/a8964d82-960a-4ae0-b139-c63807edf22b/volumes" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.268312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.269343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.269290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc 
kubenswrapper[4833]: I0127 14:30:27.269587 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k849\" (UniqueName: \"kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.269674 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.270746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.271145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.302670 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k849\" (UniqueName: \"kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849\") pod \"dnsmasq-dns-6bc7876d45-skt7p\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.396793 4833 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.554401 4833 generic.go:334] "Generic (PLEG): container finished" podID="c71020a9-7b34-40b0-843b-3779371376b0" containerID="ef3dd1ea32fe20614c881a629acba03bd494153f62b89991deeb78eeb9c12f05" exitCode=0 Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.554539 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" event={"ID":"c71020a9-7b34-40b0-843b-3779371376b0","Type":"ContainerDied","Data":"ef3dd1ea32fe20614c881a629acba03bd494153f62b89991deeb78eeb9c12f05"} Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.563769 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xc7c" event={"ID":"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030","Type":"ContainerStarted","Data":"65444abed6fdaf8a1ab3693136310e07562ff9d1dc1e9db5c84d2fff94ae7010"} Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.572608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" event={"ID":"8f9b4bbe-877b-49f5-b63a-f3a4b10553be","Type":"ContainerStarted","Data":"13f6029fc609eac11d3ceacc75a21dcc5c1419b85631888e1767f92a8d8f7db3"} Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.573490 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.577294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c","Type":"ContainerStarted","Data":"7c97771a4bb098e21830cf3585bdd3ec9816eb1d864aa028734a68df1cc15e07"} Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.614877 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" podStartSLOduration=15.101227413 
podStartE2EDuration="15.614854274s" podCreationTimestamp="2026-01-27 14:30:12 +0000 UTC" firstStartedPulling="2026-01-27 14:30:25.674274366 +0000 UTC m=+1127.325598768" lastFinishedPulling="2026-01-27 14:30:26.187901227 +0000 UTC m=+1127.839225629" observedRunningTime="2026-01-27 14:30:27.604415094 +0000 UTC m=+1129.255739496" watchObservedRunningTime="2026-01-27 14:30:27.614854274 +0000 UTC m=+1129.266178676" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.643472 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.675817 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.677120 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.678710 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.704902 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.776924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb58t\" (UniqueName: \"kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.777037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: 
\"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.777092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.777162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.777185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.878137 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.878193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") 
" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.878287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.878308 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.878331 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb58t\" (UniqueName: \"kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.880235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.880827 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 
14:30:27.880893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.880955 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: I0127 14:30:27.894257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb58t\" (UniqueName: \"kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t\") pod \"dnsmasq-dns-8554648995-xbqq4\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:27 crc kubenswrapper[4833]: W0127 14:30:27.924678 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbbf9ae_c9e5_4bcf_9389_52eb0b7d5ae1.slice/crio-6b9e69ae787226c48da29e7958f00db08780c0fabc7103da5407b443caba9cba WatchSource:0}: Error finding container 6b9e69ae787226c48da29e7958f00db08780c0fabc7103da5407b443caba9cba: Status 404 returned error can't find the container with id 6b9e69ae787226c48da29e7958f00db08780c0fabc7103da5407b443caba9cba Jan 27 14:30:28 crc kubenswrapper[4833]: I0127 14:30:28.001018 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:28 crc kubenswrapper[4833]: I0127 14:30:28.599892 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1","Type":"ContainerStarted","Data":"6b9e69ae787226c48da29e7958f00db08780c0fabc7103da5407b443caba9cba"} Jan 27 14:30:29 crc kubenswrapper[4833]: I0127 14:30:29.607994 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="dnsmasq-dns" containerID="cri-o://13f6029fc609eac11d3ceacc75a21dcc5c1419b85631888e1767f92a8d8f7db3" gracePeriod=10 Jan 27 14:30:30 crc kubenswrapper[4833]: I0127 14:30:30.620203 4833 generic.go:334] "Generic (PLEG): container finished" podID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerID="13f6029fc609eac11d3ceacc75a21dcc5c1419b85631888e1767f92a8d8f7db3" exitCode=0 Jan 27 14:30:30 crc kubenswrapper[4833]: I0127 14:30:30.620256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" event={"ID":"8f9b4bbe-877b-49f5-b63a-f3a4b10553be","Type":"ContainerDied","Data":"13f6029fc609eac11d3ceacc75a21dcc5c1419b85631888e1767f92a8d8f7db3"} Jan 27 14:30:32 crc kubenswrapper[4833]: I0127 14:30:32.260711 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:30:32 crc kubenswrapper[4833]: I0127 14:30:32.261988 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:32 crc kubenswrapper[4833]: I0127 14:30:32.551597 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.104:5353: connect: connection refused" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.354685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-695lh"] Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.508631 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.629839 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config\") pod \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.630297 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjrzp\" (UniqueName: \"kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp\") pod \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.630492 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc\") pod \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\" (UID: \"8f9b4bbe-877b-49f5-b63a-f3a4b10553be\") " Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.647636 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp" 
(OuterVolumeSpecName: "kube-api-access-xjrzp") pod "8f9b4bbe-877b-49f5-b63a-f3a4b10553be" (UID: "8f9b4bbe-877b-49f5-b63a-f3a4b10553be"). InnerVolumeSpecName "kube-api-access-xjrzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.684281 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" event={"ID":"8f9b4bbe-877b-49f5-b63a-f3a4b10553be","Type":"ContainerDied","Data":"81a943c32d097703e0435bc0f51ccbd986ec59c42d460aa51317f26f153baf21"} Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.684328 4833 scope.go:117] "RemoveContainer" containerID="13f6029fc609eac11d3ceacc75a21dcc5c1419b85631888e1767f92a8d8f7db3" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.684430 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-x8v54" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.691477 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-695lh" event={"ID":"a8938ddd-5e46-4314-af72-13a83905b6c4","Type":"ContainerStarted","Data":"52b85ef4ed70b0e4584717b93b97020409d184472f7f8e8519b991fac9db5310"} Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.693222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f9b4bbe-877b-49f5-b63a-f3a4b10553be" (UID: "8f9b4bbe-877b-49f5-b63a-f3a4b10553be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.706242 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config" (OuterVolumeSpecName: "config") pod "8f9b4bbe-877b-49f5-b63a-f3a4b10553be" (UID: "8f9b4bbe-877b-49f5-b63a-f3a4b10553be"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.711635 4833 scope.go:117] "RemoveContainer" containerID="05ecbb40f66a4a715570ca91504496a702bdb79127c507c9f666c387479070d5" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.733364 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.733402 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:34 crc kubenswrapper[4833]: I0127 14:30:34.733415 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjrzp\" (UniqueName: \"kubernetes.io/projected/8f9b4bbe-877b-49f5-b63a-f3a4b10553be-kube-api-access-xjrzp\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.096247 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.109307 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-x8v54"] Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.221378 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" path="/var/lib/kubelet/pods/8f9b4bbe-877b-49f5-b63a-f3a4b10553be/volumes" Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.421511 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.434921 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:30:35 crc kubenswrapper[4833]: 
W0127 14:30:35.617310 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0486055_d9f4_43dd_a25e_16549d574740.slice/crio-8263d235e37deb72409636c3b7bf917783b2880dacedf3148d33a2eedd1a67e3 WatchSource:0}: Error finding container 8263d235e37deb72409636c3b7bf917783b2880dacedf3148d33a2eedd1a67e3: Status 404 returned error can't find the container with id 8263d235e37deb72409636c3b7bf917783b2880dacedf3148d33a2eedd1a67e3 Jan 27 14:30:35 crc kubenswrapper[4833]: W0127 14:30:35.619352 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod040032d1_c03f_435a_9977_acdf3dde9911.slice/crio-4d867dcab81a204536808e0fde9088b5530d6cc134802e992d8c4ba994c6562e WatchSource:0}: Error finding container 4d867dcab81a204536808e0fde9088b5530d6cc134802e992d8c4ba994c6562e: Status 404 returned error can't find the container with id 4d867dcab81a204536808e0fde9088b5530d6cc134802e992d8c4ba994c6562e Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.700917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerStarted","Data":"4d867dcab81a204536808e0fde9088b5530d6cc134802e992d8c4ba994c6562e"} Jan 27 14:30:35 crc kubenswrapper[4833]: I0127 14:30:35.703573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xbqq4" event={"ID":"a0486055-d9f4-43dd-a25e-16549d574740","Type":"ContainerStarted","Data":"8263d235e37deb72409636c3b7bf917783b2880dacedf3148d33a2eedd1a67e3"} Jan 27 14:30:36 crc kubenswrapper[4833]: I0127 14:30:36.713324 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4a22294-2921-44ee-bdf0-41c631d2962c","Type":"ContainerStarted","Data":"06b5dd94eca329fcbc711247f854ab8d49acaf5623fd08a44f1388f2f08c8f2d"} Jan 27 14:30:36 
crc kubenswrapper[4833]: I0127 14:30:36.716646 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" event={"ID":"c71020a9-7b34-40b0-843b-3779371376b0","Type":"ContainerStarted","Data":"197b7af00e9036ec95d88852af1f82ac067bf915f7caf7330aa4ad1a7b84785e"} Jan 27 14:30:36 crc kubenswrapper[4833]: I0127 14:30:36.716882 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:36 crc kubenswrapper[4833]: I0127 14:30:36.716957 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="dnsmasq-dns" containerID="cri-o://197b7af00e9036ec95d88852af1f82ac067bf915f7caf7330aa4ad1a7b84785e" gracePeriod=10 Jan 27 14:30:36 crc kubenswrapper[4833]: I0127 14:30:36.758089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" podStartSLOduration=25.271201888 podStartE2EDuration="25.7580653s" podCreationTimestamp="2026-01-27 14:30:11 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.136150643 +0000 UTC m=+1127.787475035" lastFinishedPulling="2026-01-27 14:30:26.623014045 +0000 UTC m=+1128.274338447" observedRunningTime="2026-01-27 14:30:36.748466611 +0000 UTC m=+1138.399791023" watchObservedRunningTime="2026-01-27 14:30:36.7580653 +0000 UTC m=+1138.409389722" Jan 27 14:30:37 crc kubenswrapper[4833]: I0127 14:30:37.725743 4833 generic.go:334] "Generic (PLEG): container finished" podID="c71020a9-7b34-40b0-843b-3779371376b0" containerID="197b7af00e9036ec95d88852af1f82ac067bf915f7caf7330aa4ad1a7b84785e" exitCode=0 Jan 27 14:30:37 crc kubenswrapper[4833]: I0127 14:30:37.725861 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" 
event={"ID":"c71020a9-7b34-40b0-843b-3779371376b0","Type":"ContainerDied","Data":"197b7af00e9036ec95d88852af1f82ac067bf915f7caf7330aa4ad1a7b84785e"} Jan 27 14:30:37 crc kubenswrapper[4833]: I0127 14:30:37.727485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerStarted","Data":"ebad24caa0ef0da71ac712df5c8ba0c36956bda6947f1bbc61a6a1deb89786ee"} Jan 27 14:30:37 crc kubenswrapper[4833]: I0127 14:30:37.729044 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerStarted","Data":"8490ab1952cf3713fa98897b3b243166e18516a777630355c0518129f22ce82d"} Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.283553 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.410434 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config\") pod \"c71020a9-7b34-40b0-843b-3779371376b0\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.410526 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6jhc\" (UniqueName: \"kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc\") pod \"c71020a9-7b34-40b0-843b-3779371376b0\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.410571 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc\") pod \"c71020a9-7b34-40b0-843b-3779371376b0\" (UID: \"c71020a9-7b34-40b0-843b-3779371376b0\") " 
Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.425720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc" (OuterVolumeSpecName: "kube-api-access-k6jhc") pod "c71020a9-7b34-40b0-843b-3779371376b0" (UID: "c71020a9-7b34-40b0-843b-3779371376b0"). InnerVolumeSpecName "kube-api-access-k6jhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.461109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config" (OuterVolumeSpecName: "config") pod "c71020a9-7b34-40b0-843b-3779371376b0" (UID: "c71020a9-7b34-40b0-843b-3779371376b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.462653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c71020a9-7b34-40b0-843b-3779371376b0" (UID: "c71020a9-7b34-40b0-843b-3779371376b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.512269 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.512313 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6jhc\" (UniqueName: \"kubernetes.io/projected/c71020a9-7b34-40b0-843b-3779371376b0-kube-api-access-k6jhc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.512329 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c71020a9-7b34-40b0-843b-3779371376b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.738855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"cf31f7d3-86f9-490a-806a-3944c7d60c10","Type":"ContainerStarted","Data":"0e524d6ec4500063a1051e366482387b89f64a3b144ae639a76c3fd589244bc7"} Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.738949 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.741649 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" event={"ID":"c71020a9-7b34-40b0-843b-3779371376b0","Type":"ContainerDied","Data":"248eafa721f5a13fb0ee42292e2a1530ca3dabe2184179fa5bc2525957eded76"} Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.741703 4833 scope.go:117] "RemoveContainer" containerID="197b7af00e9036ec95d88852af1f82ac067bf915f7caf7330aa4ad1a7b84785e" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.741768 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9p2vm" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.743637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6584j" event={"ID":"71bea80e-a86d-40c6-b72f-9bab663cc6ea","Type":"ContainerStarted","Data":"812d6d77792bd849646fa45027fd02758f34fa125ae74aa15e9ac44551de9231"} Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.766849 4833 scope.go:117] "RemoveContainer" containerID="ef3dd1ea32fe20614c881a629acba03bd494153f62b89991deeb78eeb9c12f05" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.770609 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.179062419 podStartE2EDuration="22.770587025s" podCreationTimestamp="2026-01-27 14:30:16 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.363813137 +0000 UTC m=+1128.015137539" lastFinishedPulling="2026-01-27 14:30:34.955337743 +0000 UTC m=+1136.606662145" observedRunningTime="2026-01-27 14:30:38.764636453 +0000 UTC m=+1140.415960845" watchObservedRunningTime="2026-01-27 14:30:38.770587025 +0000 UTC m=+1140.421911427" Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.819566 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.837885 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9p2vm"] Jan 27 14:30:38 crc kubenswrapper[4833]: I0127 14:30:38.854086 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6584j" podStartSLOduration=8.447624608 podStartE2EDuration="17.854055598s" podCreationTimestamp="2026-01-27 14:30:21 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.325835811 +0000 UTC m=+1127.977160213" lastFinishedPulling="2026-01-27 14:30:35.732266811 +0000 UTC m=+1137.383591203" observedRunningTime="2026-01-27 14:30:38.851411105 
+0000 UTC m=+1140.502735507" watchObservedRunningTime="2026-01-27 14:30:38.854055598 +0000 UTC m=+1140.505380010" Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.227154 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c71020a9-7b34-40b0-843b-3779371376b0" path="/var/lib/kubelet/pods/c71020a9-7b34-40b0-843b-3779371376b0/volumes" Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.753633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c797d35-1d15-4eed-a88c-20fd3aa64b91","Type":"ContainerStarted","Data":"18fcd676277049b551d4f589971fcbf48fc39844b54b4530d7bb36853f5eb5ba"} Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.758347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1","Type":"ContainerStarted","Data":"2542e4e42d38d6be7321bd3524ea2b692e819f47abdc019ea48eef2738f5dcc6"} Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.764556 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c","Type":"ContainerStarted","Data":"14cbd3b27c7c0edb0b4f0ba780094be14cee6d25e89b7632bbda2c113ccc4855"} Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.767690 4833 generic.go:334] "Generic (PLEG): container finished" podID="a0486055-d9f4-43dd-a25e-16549d574740" containerID="bbcd39c98861f1b218a15ed081d554049272378619182ed6e2cf52a029da39e5" exitCode=0 Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.767789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xbqq4" event={"ID":"a0486055-d9f4-43dd-a25e-16549d574740","Type":"ContainerDied","Data":"bbcd39c98861f1b218a15ed081d554049272378619182ed6e2cf52a029da39e5"} Jan 27 14:30:39 crc kubenswrapper[4833]: I0127 14:30:39.768559 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-6584j" Jan 27 14:30:40 crc kubenswrapper[4833]: I0127 14:30:40.775219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xc7c" event={"ID":"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030","Type":"ContainerStarted","Data":"84c5e7f6f5d49a27fd3be6f9c5e530a1e78388693d77b35e76adc1e5af181955"} Jan 27 14:30:42 crc kubenswrapper[4833]: I0127 14:30:42.790865 4833 generic.go:334] "Generic (PLEG): container finished" podID="9c97e6a0-c4b8-4f8d-ac90-d28e93a48030" containerID="84c5e7f6f5d49a27fd3be6f9c5e530a1e78388693d77b35e76adc1e5af181955" exitCode=0 Jan 27 14:30:42 crc kubenswrapper[4833]: I0127 14:30:42.790953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xc7c" event={"ID":"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030","Type":"ContainerDied","Data":"84c5e7f6f5d49a27fd3be6f9c5e530a1e78388693d77b35e76adc1e5af181955"} Jan 27 14:30:43 crc kubenswrapper[4833]: I0127 14:30:43.801961 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4a22294-2921-44ee-bdf0-41c631d2962c" containerID="06b5dd94eca329fcbc711247f854ab8d49acaf5623fd08a44f1388f2f08c8f2d" exitCode=0 Jan 27 14:30:43 crc kubenswrapper[4833]: I0127 14:30:43.802013 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4a22294-2921-44ee-bdf0-41c631d2962c","Type":"ContainerDied","Data":"06b5dd94eca329fcbc711247f854ab8d49acaf5623fd08a44f1388f2f08c8f2d"} Jan 27 14:30:45 crc kubenswrapper[4833]: I0127 14:30:45.823302 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerStarted","Data":"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.631645 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 14:30:46 crc kubenswrapper[4833]: 
I0127 14:30:46.836071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c","Type":"ContainerStarted","Data":"7654de153a9dd74a4e5ea6c3d4c6bcec3baa63f40fd59fc20047f56b69142246"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.838543 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"534b7701-45c8-403b-8601-3e22e9177c61","Type":"ContainerStarted","Data":"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.838687 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.839964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerStarted","Data":"6cde151894058c02535de96fe019040449253d19f1811476686b454d175a3315"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.842124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xbqq4" event={"ID":"a0486055-d9f4-43dd-a25e-16549d574740","Type":"ContainerStarted","Data":"49d1d5cc642cb755cd933c9fb3ef20da52cdf686cbb39256b8931f7e8c755f8f"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.842246 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.843779 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1","Type":"ContainerStarted","Data":"bccd8fc75136c06bbdba2413f4e008c3b58cf2d1eb48e8c040836bb0a02bc03b"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.845192 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="040032d1-c03f-435a-9977-acdf3dde9911" containerID="fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30" exitCode=0 Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.845253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerDied","Data":"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.847940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xc7c" event={"ID":"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030","Type":"ContainerStarted","Data":"5f007eb2ec81165fc19458838e0562a194a4014979352002377d104d531cbcd1"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.847976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9xc7c" event={"ID":"9c97e6a0-c4b8-4f8d-ac90-d28e93a48030","Type":"ContainerStarted","Data":"4375d40abd8f5cad3a9fd2294897943930ae979b018fcb13a1838853b8b9a523"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.848114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.849904 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-695lh" event={"ID":"a8938ddd-5e46-4314-af72-13a83905b6c4","Type":"ContainerStarted","Data":"075a158088555c3a0eb4cdead479348750d68134ae44acbb385c7951167c8939"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.860744 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a4a22294-2921-44ee-bdf0-41c631d2962c","Type":"ContainerStarted","Data":"4fcb21bac6fb83123c5168096f746d68fc4248f0177969b9e2fb204e3c2ae473"} Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.901035 4833 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.915793918 podStartE2EDuration="22.901019653s" podCreationTimestamp="2026-01-27 14:30:24 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.68395937 +0000 UTC m=+1128.335283772" lastFinishedPulling="2026-01-27 14:30:35.669185105 +0000 UTC m=+1137.320509507" observedRunningTime="2026-01-27 14:30:46.869909071 +0000 UTC m=+1148.521233473" watchObservedRunningTime="2026-01-27 14:30:46.901019653 +0000 UTC m=+1148.552344045" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.901996 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9xc7c" podStartSLOduration=16.789564636 podStartE2EDuration="25.901990627s" podCreationTimestamp="2026-01-27 14:30:21 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.531684195 +0000 UTC m=+1128.183008597" lastFinishedPulling="2026-01-27 14:30:35.644110186 +0000 UTC m=+1137.295434588" observedRunningTime="2026-01-27 14:30:46.900817708 +0000 UTC m=+1148.552142120" watchObservedRunningTime="2026-01-27 14:30:46.901990627 +0000 UTC m=+1148.553315029" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.950060 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.07323621 podStartE2EDuration="26.950043143s" podCreationTimestamp="2026-01-27 14:30:20 +0000 UTC" firstStartedPulling="2026-01-27 14:30:27.928520922 +0000 UTC m=+1129.579845324" lastFinishedPulling="2026-01-27 14:30:35.805327855 +0000 UTC m=+1137.456652257" observedRunningTime="2026-01-27 14:30:46.944814178 +0000 UTC m=+1148.596138570" watchObservedRunningTime="2026-01-27 14:30:46.950043143 +0000 UTC m=+1148.601367545" Jan 27 14:30:46 crc kubenswrapper[4833]: I0127 14:30:46.980117 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-695lh" podStartSLOduration=16.890356517 podStartE2EDuration="20.980099212s" 
podCreationTimestamp="2026-01-27 14:30:26 +0000 UTC" firstStartedPulling="2026-01-27 14:30:34.454680051 +0000 UTC m=+1136.106004453" lastFinishedPulling="2026-01-27 14:30:38.544422746 +0000 UTC m=+1140.195747148" observedRunningTime="2026-01-27 14:30:46.970059611 +0000 UTC m=+1148.621384013" watchObservedRunningTime="2026-01-27 14:30:46.980099212 +0000 UTC m=+1148.631423614" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.007885 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.063875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.064070 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-xbqq4" podStartSLOduration=20.064050646 podStartE2EDuration="20.064050646s" podCreationTimestamp="2026-01-27 14:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:47.057019068 +0000 UTC m=+1148.708343490" watchObservedRunningTime="2026-01-27 14:30:47.064050646 +0000 UTC m=+1148.715375048" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.080365 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.489349361 podStartE2EDuration="33.080346664s" podCreationTimestamp="2026-01-27 14:30:14 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.177898389 +0000 UTC m=+1127.829222791" lastFinishedPulling="2026-01-27 14:30:34.768895692 +0000 UTC m=+1136.420220094" observedRunningTime="2026-01-27 14:30:47.075226883 +0000 UTC m=+1148.726551285" watchObservedRunningTime="2026-01-27 14:30:47.080346664 +0000 UTC m=+1148.731671066" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.097802 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.990274324 podStartE2EDuration="29.097783301s" podCreationTimestamp="2026-01-27 14:30:18 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.183301838 +0000 UTC m=+1127.834626230" lastFinishedPulling="2026-01-27 14:30:36.290810795 +0000 UTC m=+1137.942135207" observedRunningTime="2026-01-27 14:30:47.089922733 +0000 UTC m=+1148.741247135" watchObservedRunningTime="2026-01-27 14:30:47.097783301 +0000 UTC m=+1148.749107703" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.285743 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.878993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerStarted","Data":"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b"} Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.880407 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.880437 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 14:30:47.915417 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" podStartSLOduration=20.91539849 podStartE2EDuration="20.91539849s" podCreationTimestamp="2026-01-27 14:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:47.908399453 +0000 UTC m=+1149.559723875" watchObservedRunningTime="2026-01-27 14:30:47.91539849 +0000 UTC m=+1149.566722912" Jan 27 14:30:47 crc kubenswrapper[4833]: I0127 
14:30:47.925312 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.545000 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.574537 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:30:48 crc kubenswrapper[4833]: E0127 14:30:48.574871 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.574884 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: E0127 14:30:48.574900 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.574906 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: E0127 14:30:48.574929 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="init" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.574936 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="init" Jan 27 14:30:48 crc kubenswrapper[4833]: E0127 14:30:48.574946 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="init" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.574952 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="init" Jan 27 14:30:48 crc 
kubenswrapper[4833]: I0127 14:30:48.575106 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71020a9-7b34-40b0-843b-3779371376b0" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.575120 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9b4bbe-877b-49f5-b63a-f3a4b10553be" containerName="dnsmasq-dns" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.575915 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.598863 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.687489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.687532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.687741 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.687962 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmr5j\" (UniqueName: \"kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.687990 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.789351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.789496 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmr5j\" (UniqueName: \"kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.789526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.789572 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.789596 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.790286 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.790377 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.790434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.790837 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.824346 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmr5j\" (UniqueName: \"kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j\") pod \"dnsmasq-dns-b8fbc5445-pj4ff\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.886050 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:48 crc kubenswrapper[4833]: I0127 14:30:48.892399 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.286556 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.370275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.374394 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:49 crc kubenswrapper[4833]: W0127 14:30:49.376632 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c6ecc5_6ef8_4032_9e62_8f0ad1ff86f2.slice/crio-6f8de7ab18d91a26e3c28dbef40d408370e6ecad453485d6f44d686e6ff582ab WatchSource:0}: Error finding container 6f8de7ab18d91a26e3c28dbef40d408370e6ecad453485d6f44d686e6ff582ab: Status 404 returned error can't find the container with id 
6f8de7ab18d91a26e3c28dbef40d408370e6ecad453485d6f44d686e6ff582ab Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.660504 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.666931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.670426 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hhmql" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.670850 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.670912 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.671060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.685038 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.821886 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.821942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.821997 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589sj\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-kube-api-access-589sj\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.822091 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-lock\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.822122 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-cache\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.822174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df43a2ef-c36c-4b08-bee6-6820e443220c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.894710 4833 generic.go:334] "Generic (PLEG): container finished" podID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerID="8395c71513ab3346d0e93a2ecb1f581533d856fcc28e88e7caf2c93d0d8de72f" exitCode=0 Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.894912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" 
event={"ID":"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2","Type":"ContainerDied","Data":"8395c71513ab3346d0e93a2ecb1f581533d856fcc28e88e7caf2c93d0d8de72f"} Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.895175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" event={"ID":"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2","Type":"ContainerStarted","Data":"6f8de7ab18d91a26e3c28dbef40d408370e6ecad453485d6f44d686e6ff582ab"} Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.897467 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="dnsmasq-dns" containerID="cri-o://9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b" gracePeriod=10 Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589sj\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-kube-api-access-589sj\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927366 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-lock\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927393 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-cache\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927457 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df43a2ef-c36c-4b08-bee6-6820e443220c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927500 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.927524 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: E0127 14:30:49.927692 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4833]: E0127 14:30:49.927730 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4833]: E0127 14:30:49.927778 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift podName:df43a2ef-c36c-4b08-bee6-6820e443220c nodeName:}" failed. No retries permitted until 2026-01-27 14:30:50.427760391 +0000 UTC m=+1152.079084803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift") pod "swift-storage-0" (UID: "df43a2ef-c36c-4b08-bee6-6820e443220c") : configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.928737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-lock\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.928986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df43a2ef-c36c-4b08-bee6-6820e443220c-cache\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.929131 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.936716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df43a2ef-c36c-4b08-bee6-6820e443220c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.941616 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.947664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589sj\" 
(UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-kube-api-access-589sj\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4833]: I0127 14:30:49.977746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.100518 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.102018 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.105268 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-v2pgq" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.105576 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.105712 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.105593 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.110980 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.172649 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7h76q"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.173761 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.176107 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.176138 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.176251 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.195178 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7h76q"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-scripts\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242225 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-config\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242286 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242356 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242423 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bmk\" (UniqueName: \"kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242561 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zkc4\" (UniqueName: \"kubernetes.io/projected/a76372e9-2f46-4b52-9c11-842331d4357f-kube-api-access-7zkc4\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242634 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242665 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.242695 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.244008 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4cbg4"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.245326 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.254424 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7h76q"] Jan 27 14:30:50 crc kubenswrapper[4833]: E0127 14:30:50.255429 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-n5bmk ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-7h76q" podUID="0d494231-bc5f-4d50-b859-2827ecfe7fdb" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.265012 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4cbg4"] Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.344672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345034 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts\") pod 
\"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345140 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift\") pod \"swift-ring-rebalance-4cbg4\" 
(UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345241 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-scripts\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345288 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-config\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.345701 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-config\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346280 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346466 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bmk\" (UniqueName: \"kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346493 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346513 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346543 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zkc4\" (UniqueName: \"kubernetes.io/projected/a76372e9-2f46-4b52-9c11-842331d4357f-kube-api-access-7zkc4\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.346881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76372e9-2f46-4b52-9c11-842331d4357f-scripts\") pod \"ovn-northd-0\" (UID: 
\"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.347331 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.350257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.350664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.350981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a76372e9-2f46-4b52-9c11-842331d4357f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.351039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.352272 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.352668 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.352986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.355674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76372e9-2f46-4b52-9c11-842331d4357f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.358214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle\") pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.379111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bmk\" (UniqueName: \"kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk\") 
pod \"swift-ring-rebalance-7h76q\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.381052 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zkc4\" (UniqueName: \"kubernetes.io/projected/a76372e9-2f46-4b52-9c11-842331d4357f-kube-api-access-7zkc4\") pod \"ovn-northd-0\" (UID: \"a76372e9-2f46-4b52-9c11-842331d4357f\") " pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448508 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448537 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 
14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448574 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.448663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: E0127 14:30:50.448685 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:50 crc kubenswrapper[4833]: E0127 14:30:50.448715 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:50 crc kubenswrapper[4833]: E0127 14:30:50.448767 4833 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift podName:df43a2ef-c36c-4b08-bee6-6820e443220c nodeName:}" failed. No retries permitted until 2026-01-27 14:30:51.448748158 +0000 UTC m=+1153.100072560 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift") pod "swift-storage-0" (UID: "df43a2ef-c36c-4b08-bee6-6820e443220c") : configmap "swift-ring-files" not found Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.449005 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.449420 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.449698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.451896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") 
" pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.452136 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.452403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.470962 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.474126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84\") pod \"swift-ring-rebalance-4cbg4\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.475213 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.549590 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb\") pod \"040032d1-c03f-435a-9977-acdf3dde9911\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.549719 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k849\" (UniqueName: \"kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849\") pod \"040032d1-c03f-435a-9977-acdf3dde9911\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.549753 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config\") pod \"040032d1-c03f-435a-9977-acdf3dde9911\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.549845 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc\") pod \"040032d1-c03f-435a-9977-acdf3dde9911\" (UID: \"040032d1-c03f-435a-9977-acdf3dde9911\") " Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.553874 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849" (OuterVolumeSpecName: "kube-api-access-9k849") pod "040032d1-c03f-435a-9977-acdf3dde9911" (UID: "040032d1-c03f-435a-9977-acdf3dde9911"). InnerVolumeSpecName "kube-api-access-9k849". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.560543 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.624703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "040032d1-c03f-435a-9977-acdf3dde9911" (UID: "040032d1-c03f-435a-9977-acdf3dde9911"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.624787 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config" (OuterVolumeSpecName: "config") pod "040032d1-c03f-435a-9977-acdf3dde9911" (UID: "040032d1-c03f-435a-9977-acdf3dde9911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.635683 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "040032d1-c03f-435a-9977-acdf3dde9911" (UID: "040032d1-c03f-435a-9977-acdf3dde9911"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.657353 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.657390 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k849\" (UniqueName: \"kubernetes.io/projected/040032d1-c03f-435a-9977-acdf3dde9911-kube-api-access-9k849\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.657401 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.657410 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/040032d1-c03f-435a-9977-acdf3dde9911-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.914958 4833 generic.go:334] "Generic (PLEG): container finished" podID="040032d1-c03f-435a-9977-acdf3dde9911" containerID="9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b" exitCode=0 Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.915027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerDied","Data":"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b"} Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.915064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" event={"ID":"040032d1-c03f-435a-9977-acdf3dde9911","Type":"ContainerDied","Data":"4d867dcab81a204536808e0fde9088b5530d6cc134802e992d8c4ba994c6562e"} Jan 27 
14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.915084 4833 scope.go:117] "RemoveContainer" containerID="9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.915010 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-skt7p" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.918700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" event={"ID":"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2","Type":"ContainerStarted","Data":"bf8cc554b13117fc6146d9266ac1ff8fa909587717b0c5e0ec0ebd03c2405ee6"} Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.920126 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.924631 4833 generic.go:334] "Generic (PLEG): container finished" podID="4c797d35-1d15-4eed-a88c-20fd3aa64b91" containerID="18fcd676277049b551d4f589971fcbf48fc39844b54b4530d7bb36853f5eb5ba" exitCode=0 Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.924736 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c797d35-1d15-4eed-a88c-20fd3aa64b91","Type":"ContainerDied","Data":"18fcd676277049b551d4f589971fcbf48fc39844b54b4530d7bb36853f5eb5ba"} Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.924929 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:50 crc kubenswrapper[4833]: I0127 14:30:50.943539 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podStartSLOduration=2.94351602 podStartE2EDuration="2.94351602s" podCreationTimestamp="2026-01-27 14:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:50.938702645 +0000 UTC m=+1152.590027047" watchObservedRunningTime="2026-01-27 14:30:50.94351602 +0000 UTC m=+1152.594840422" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.022177 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.023185 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.047248 4833 scope.go:117] "RemoveContainer" containerID="fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063476 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5bmk\" (UniqueName: \"kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063642 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063675 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063806 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.063856 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices\") pod \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\" (UID: \"0d494231-bc5f-4d50-b859-2827ecfe7fdb\") " Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.064427 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts" (OuterVolumeSpecName: "scripts") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.070757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk" (OuterVolumeSpecName: "kube-api-access-n5bmk") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "kube-api-access-n5bmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.072566 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.073131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.073205 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.075543 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.079234 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0d494231-bc5f-4d50-b859-2827ecfe7fdb" (UID: "0d494231-bc5f-4d50-b859-2827ecfe7fdb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.099144 4833 scope.go:117] "RemoveContainer" containerID="9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b" Jan 27 14:30:51 crc kubenswrapper[4833]: E0127 14:30:51.101826 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b\": container with ID starting with 9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b not found: ID does not exist" containerID="9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.101870 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b"} err="failed to get container status \"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b\": rpc error: code = NotFound desc = could not find container 
\"9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b\": container with ID starting with 9cb71be7a5e5c36683177c17c893cf729ba4f787eb9ff75f6e70db285767f07b not found: ID does not exist" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.101900 4833 scope.go:117] "RemoveContainer" containerID="fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30" Jan 27 14:30:51 crc kubenswrapper[4833]: E0127 14:30:51.103646 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30\": container with ID starting with fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30 not found: ID does not exist" containerID="fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.103675 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30"} err="failed to get container status \"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30\": rpc error: code = NotFound desc = could not find container \"fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30\": container with ID starting with fd63f80480839e8607f3f4ed36ed28798cdb8c57a9ade8d87d149cb352c69c30 not found: ID does not exist" Jan 27 14:30:51 crc kubenswrapper[4833]: W0127 14:30:51.112092 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11a66058_c0bb_4357_a752_8823939d7ee3.slice/crio-36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769 WatchSource:0}: Error finding container 36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769: Status 404 returned error can't find the container with id 36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769 Jan 27 14:30:51 crc 
kubenswrapper[4833]: I0127 14:30:51.116275 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-4cbg4"] Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.122083 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.136983 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-skt7p"] Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166019 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166045 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166056 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0d494231-bc5f-4d50-b859-2827ecfe7fdb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166066 4833 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166076 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5bmk\" (UniqueName: \"kubernetes.io/projected/0d494231-bc5f-4d50-b859-2827ecfe7fdb-kube-api-access-n5bmk\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166084 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d494231-bc5f-4d50-b859-2827ecfe7fdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.166092 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d494231-bc5f-4d50-b859-2827ecfe7fdb-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.220614 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040032d1-c03f-435a-9977-acdf3dde9911" path="/var/lib/kubelet/pods/040032d1-c03f-435a-9977-acdf3dde9911/volumes" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.471847 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:51 crc kubenswrapper[4833]: E0127 14:30:51.472405 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:51 crc kubenswrapper[4833]: E0127 14:30:51.472427 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:51 crc kubenswrapper[4833]: E0127 14:30:51.472662 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift podName:df43a2ef-c36c-4b08-bee6-6820e443220c nodeName:}" failed. No retries permitted until 2026-01-27 14:30:53.472642482 +0000 UTC m=+1155.123966884 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift") pod "swift-storage-0" (UID: "df43a2ef-c36c-4b08-bee6-6820e443220c") : configmap "swift-ring-files" not found Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.932149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4cbg4" event={"ID":"11a66058-c0bb-4357-a752-8823939d7ee3","Type":"ContainerStarted","Data":"36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769"} Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.934341 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a76372e9-2f46-4b52-9c11-842331d4357f","Type":"ContainerStarted","Data":"00887893fc8c187326a9bb71298b17395516f9f1b20b2d6de733c6c58972ee8f"} Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.935837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c797d35-1d15-4eed-a88c-20fd3aa64b91","Type":"ContainerStarted","Data":"424d2654cd172398b014f40fc470e5ffc079cbbdb662cbea083bc96560729718"} Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.935932 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7h76q" Jan 27 14:30:51 crc kubenswrapper[4833]: I0127 14:30:51.972402 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.280924337 podStartE2EDuration="38.972378482s" podCreationTimestamp="2026-01-27 14:30:13 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.114349842 +0000 UTC m=+1127.765674244" lastFinishedPulling="2026-01-27 14:30:35.805803977 +0000 UTC m=+1137.457128389" observedRunningTime="2026-01-27 14:30:51.965493548 +0000 UTC m=+1153.616817960" watchObservedRunningTime="2026-01-27 14:30:51.972378482 +0000 UTC m=+1153.623702894" Jan 27 14:30:52 crc kubenswrapper[4833]: I0127 14:30:52.010159 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7h76q"] Jan 27 14:30:52 crc kubenswrapper[4833]: I0127 14:30:52.056276 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7h76q"] Jan 27 14:30:52 crc kubenswrapper[4833]: I0127 14:30:52.953358 4833 generic.go:334] "Generic (PLEG): container finished" podID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerID="6cde151894058c02535de96fe019040449253d19f1811476686b454d175a3315" exitCode=0 Jan 27 14:30:52 crc kubenswrapper[4833]: I0127 14:30:52.953544 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerDied","Data":"6cde151894058c02535de96fe019040449253d19f1811476686b454d175a3315"} Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.003638 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.223102 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d494231-bc5f-4d50-b859-2827ecfe7fdb" 
path="/var/lib/kubelet/pods/0d494231-bc5f-4d50-b859-2827ecfe7fdb/volumes" Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.508804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:53 crc kubenswrapper[4833]: E0127 14:30:53.509057 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:53 crc kubenswrapper[4833]: E0127 14:30:53.509081 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:53 crc kubenswrapper[4833]: E0127 14:30:53.509132 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift podName:df43a2ef-c36c-4b08-bee6-6820e443220c nodeName:}" failed. No retries permitted until 2026-01-27 14:30:57.509115649 +0000 UTC m=+1159.160440061 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift") pod "swift-storage-0" (UID: "df43a2ef-c36c-4b08-bee6-6820e443220c") : configmap "swift-ring-files" not found Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.964833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a76372e9-2f46-4b52-9c11-842331d4357f","Type":"ContainerStarted","Data":"499f78b2e77d8ba30368faee6c758b65884e558428a250357acd6c5e0125c0f8"} Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.964883 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a76372e9-2f46-4b52-9c11-842331d4357f","Type":"ContainerStarted","Data":"0eec2b456fc140051d2f86e25219da001ed843fb87c2f6e337e3a8089533027f"} Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.966001 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 14:30:53 crc kubenswrapper[4833]: I0127 14:30:53.991153 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.268377059 podStartE2EDuration="3.991134636s" podCreationTimestamp="2026-01-27 14:30:50 +0000 UTC" firstStartedPulling="2026-01-27 14:30:51.03105666 +0000 UTC m=+1152.682381082" lastFinishedPulling="2026-01-27 14:30:52.753814257 +0000 UTC m=+1154.405138659" observedRunningTime="2026-01-27 14:30:53.982934 +0000 UTC m=+1155.634258412" watchObservedRunningTime="2026-01-27 14:30:53.991134636 +0000 UTC m=+1155.642459038" Jan 27 14:30:55 crc kubenswrapper[4833]: I0127 14:30:55.164882 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 14:30:55 crc kubenswrapper[4833]: I0127 14:30:55.165098 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: 
I0127 14:30:56.314886 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.315431 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.400315 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.415005 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.501750 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.880510 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-t27br"] Jan 27 14:30:56 crc kubenswrapper[4833]: E0127 14:30:56.880972 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="dnsmasq-dns" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.880993 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="dnsmasq-dns" Jan 27 14:30:56 crc kubenswrapper[4833]: E0127 14:30:56.881022 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="init" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.881030 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="init" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.881216 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="040032d1-c03f-435a-9977-acdf3dde9911" containerName="dnsmasq-dns" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 
14:30:56.882007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t27br" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.892349 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t27br"] Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.978514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjwt\" (UniqueName: \"kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:56 crc kubenswrapper[4833]: I0127 14:30:56.978621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.008976 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d9e5-account-create-update-s49bs"] Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.021008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4cbg4" event={"ID":"11a66058-c0bb-4357-a752-8823939d7ee3","Type":"ContainerStarted","Data":"da30d6c82ce0b7e512dd13b4c02fba86077f7735700a18c1da6573968f4974aa"} Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.021120 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.028237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.050461 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d9e5-account-create-update-s49bs"] Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.053942 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-4cbg4" podStartSLOduration=1.914257844 podStartE2EDuration="7.053933154s" podCreationTimestamp="2026-01-27 14:30:50 +0000 UTC" firstStartedPulling="2026-01-27 14:30:51.115749571 +0000 UTC m=+1152.767073973" lastFinishedPulling="2026-01-27 14:30:56.255424891 +0000 UTC m=+1157.906749283" observedRunningTime="2026-01-27 14:30:57.042676295 +0000 UTC m=+1158.694000697" watchObservedRunningTime="2026-01-27 14:30:57.053933154 +0000 UTC m=+1158.705257556" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.079864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxjwt\" (UniqueName: \"kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.079939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.080666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.099635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxjwt\" (UniqueName: \"kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt\") pod \"glance-db-create-t27br\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.110425 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.181292 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.181411 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzt6t\" (UniqueName: \"kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.205258 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t27br" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.283197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.283306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzt6t\" (UniqueName: \"kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.284102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.323663 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzt6t\" (UniqueName: \"kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t\") pod \"glance-d9e5-account-create-update-s49bs\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.355331 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.593876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:30:57 crc kubenswrapper[4833]: E0127 14:30:57.594321 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:57 crc kubenswrapper[4833]: E0127 14:30:57.594335 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:57 crc kubenswrapper[4833]: E0127 14:30:57.594377 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift podName:df43a2ef-c36c-4b08-bee6-6820e443220c nodeName:}" failed. No retries permitted until 2026-01-27 14:31:05.594362946 +0000 UTC m=+1167.245687348 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift") pod "swift-storage-0" (UID: "df43a2ef-c36c-4b08-bee6-6820e443220c") : configmap "swift-ring-files" not found Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.773864 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t27br"] Jan 27 14:30:57 crc kubenswrapper[4833]: W0127 14:30:57.778375 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd00b36b_7dd8_44b6_82f2_6b3dd9dbd0fb.slice/crio-4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8 WatchSource:0}: Error finding container 4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8: Status 404 returned error can't find the container with id 4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8 Jan 27 14:30:57 crc kubenswrapper[4833]: I0127 14:30:57.922945 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d9e5-account-create-update-s49bs"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.023014 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t27br" event={"ID":"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb","Type":"ContainerStarted","Data":"4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8"} Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.442154 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-z9mm9"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.443179 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.473030 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-z9mm9"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.511557 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.607503 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-402d-account-create-update-8pv4k"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.612052 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.614880 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.616018 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-402d-account-create-update-8pv4k"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.626583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.626665 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzq2s\" (UniqueName: \"kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.728379 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdds\" (UniqueName: \"kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.728435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.728575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.728623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzq2s\" (UniqueName: \"kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.729352 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc 
kubenswrapper[4833]: I0127 14:30:58.748972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzq2s\" (UniqueName: \"kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s\") pod \"watcher-db-create-z9mm9\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.760897 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-z9mm9" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.829902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grdds\" (UniqueName: \"kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.829955 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.830686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.847218 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grdds\" (UniqueName: 
\"kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds\") pod \"watcher-402d-account-create-update-8pv4k\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.894625 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.927009 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.974502 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:30:58 crc kubenswrapper[4833]: I0127 14:30:58.974726 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-xbqq4" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="dnsmasq-dns" containerID="cri-o://49d1d5cc642cb755cd933c9fb3ef20da52cdf686cbb39256b8931f7e8c755f8f" gracePeriod=10 Jan 27 14:31:00 crc kubenswrapper[4833]: I0127 14:31:00.040861 4833 generic.go:334] "Generic (PLEG): container finished" podID="a0486055-d9f4-43dd-a25e-16549d574740" containerID="49d1d5cc642cb755cd933c9fb3ef20da52cdf686cbb39256b8931f7e8c755f8f" exitCode=0 Jan 27 14:31:00 crc kubenswrapper[4833]: I0127 14:31:00.040925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xbqq4" event={"ID":"a0486055-d9f4-43dd-a25e-16549d574740","Type":"ContainerDied","Data":"49d1d5cc642cb755cd933c9fb3ef20da52cdf686cbb39256b8931f7e8c755f8f"} Jan 27 14:31:00 crc kubenswrapper[4833]: I0127 14:31:00.043214 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9e5-account-create-update-s49bs" 
event={"ID":"03d869cb-334d-4d2a-917c-25fe86c0610b","Type":"ContainerStarted","Data":"aec9766ce538eddf780c0c12cc697a38dde2e62e0fa3474f81aed8082d32b3b2"} Jan 27 14:31:00 crc kubenswrapper[4833]: W0127 14:31:00.156384 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11048d75_f33a_45c7_867a_a1ba4eb48e52.slice/crio-4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e WatchSource:0}: Error finding container 4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e: Status 404 returned error can't find the container with id 4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e Jan 27 14:31:00 crc kubenswrapper[4833]: I0127 14:31:00.164222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-402d-account-create-update-8pv4k"] Jan 27 14:31:00 crc kubenswrapper[4833]: I0127 14:31:00.213759 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-z9mm9"] Jan 27 14:31:00 crc kubenswrapper[4833]: W0127 14:31:00.222430 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19bc9850_2467_4ba0_a2bd_d901ad222ea1.slice/crio-8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4 WatchSource:0}: Error finding container 8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4: Status 404 returned error can't find the container with id 8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4 Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.055028 4833 generic.go:334] "Generic (PLEG): container finished" podID="03d869cb-334d-4d2a-917c-25fe86c0610b" containerID="e06a27e5305b638e65449979d92fcfe5c1917c3eeb49841d00c6553c1e152fa3" exitCode=0 Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.055111 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9e5-account-create-update-s49bs" 
event={"ID":"03d869cb-334d-4d2a-917c-25fe86c0610b","Type":"ContainerDied","Data":"e06a27e5305b638e65449979d92fcfe5c1917c3eeb49841d00c6553c1e152fa3"} Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.058305 4833 generic.go:334] "Generic (PLEG): container finished" podID="19bc9850-2467-4ba0-a2bd-d901ad222ea1" containerID="8b051c4c1520a89e27e2ef667fbe83134f48bb3774dd8081fa80a9a3986c211f" exitCode=0 Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.058386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-z9mm9" event={"ID":"19bc9850-2467-4ba0-a2bd-d901ad222ea1","Type":"ContainerDied","Data":"8b051c4c1520a89e27e2ef667fbe83134f48bb3774dd8081fa80a9a3986c211f"} Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.058421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-z9mm9" event={"ID":"19bc9850-2467-4ba0-a2bd-d901ad222ea1","Type":"ContainerStarted","Data":"8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4"} Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.061052 4833 generic.go:334] "Generic (PLEG): container finished" podID="11048d75-f33a-45c7-867a-a1ba4eb48e52" containerID="c6cc17b5a899cb7e366350072839fd07d637bbff432f112a05dbd95618fdd6df" exitCode=0 Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.061124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-402d-account-create-update-8pv4k" event={"ID":"11048d75-f33a-45c7-867a-a1ba4eb48e52","Type":"ContainerDied","Data":"c6cc17b5a899cb7e366350072839fd07d637bbff432f112a05dbd95618fdd6df"} Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.061149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-402d-account-create-update-8pv4k" event={"ID":"11048d75-f33a-45c7-867a-a1ba4eb48e52","Type":"ContainerStarted","Data":"4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e"} Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.073510 4833 
generic.go:334] "Generic (PLEG): container finished" podID="dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" containerID="551f9bd745798fe95f351bf4bfcf28007f82fec9bfe4c3405c7ea0592908e1d0" exitCode=0 Jan 27 14:31:01 crc kubenswrapper[4833]: I0127 14:31:01.073560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t27br" event={"ID":"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb","Type":"ContainerDied","Data":"551f9bd745798fe95f351bf4bfcf28007f82fec9bfe4c3405c7ea0592908e1d0"} Jan 27 14:31:02 crc kubenswrapper[4833]: I0127 14:31:02.261217 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:31:02 crc kubenswrapper[4833]: I0127 14:31:02.261282 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.493134 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qtxmt"] Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.494408 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.497584 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.506792 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qtxmt"] Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.651167 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdwf\" (UniqueName: \"kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf\") pod \"root-account-create-update-qtxmt\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.651218 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts\") pod \"root-account-create-update-qtxmt\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.752583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdwf\" (UniqueName: \"kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf\") pod \"root-account-create-update-qtxmt\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.752663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts\") pod \"root-account-create-update-qtxmt\" (UID: 
\"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.754010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts\") pod \"root-account-create-update-qtxmt\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.798223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdwf\" (UniqueName: \"kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf\") pod \"root-account-create-update-qtxmt\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.850332 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.858802 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t27br" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.870984 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.883661 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.895847 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:31:03 crc kubenswrapper[4833]: I0127 14:31:03.922965 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-z9mm9" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.059852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts\") pod \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzt6t\" (UniqueName: \"kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t\") pod \"03d869cb-334d-4d2a-917c-25fe86c0610b\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grdds\" (UniqueName: \"kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds\") pod \"11048d75-f33a-45c7-867a-a1ba4eb48e52\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzq2s\" (UniqueName: \"kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s\") pod \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\" (UID: \"19bc9850-2467-4ba0-a2bd-d901ad222ea1\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060324 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config\") pod \"a0486055-d9f4-43dd-a25e-16549d574740\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts\") pod \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060401 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb\") pod \"a0486055-d9f4-43dd-a25e-16549d574740\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts\") pod \"11048d75-f33a-45c7-867a-a1ba4eb48e52\" (UID: \"11048d75-f33a-45c7-867a-a1ba4eb48e52\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxjwt\" (UniqueName: \"kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt\") pod \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\" (UID: \"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060635 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc\") pod \"a0486055-d9f4-43dd-a25e-16549d574740\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060699 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb\") pod \"a0486055-d9f4-43dd-a25e-16549d574740\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " 
Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.060762 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb58t\" (UniqueName: \"kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t\") pod \"a0486055-d9f4-43dd-a25e-16549d574740\" (UID: \"a0486055-d9f4-43dd-a25e-16549d574740\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.061059 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts\") pod \"03d869cb-334d-4d2a-917c-25fe86c0610b\" (UID: \"03d869cb-334d-4d2a-917c-25fe86c0610b\") " Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.062429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19bc9850-2467-4ba0-a2bd-d901ad222ea1" (UID: "19bc9850-2467-4ba0-a2bd-d901ad222ea1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.062455 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03d869cb-334d-4d2a-917c-25fe86c0610b" (UID: "03d869cb-334d-4d2a-917c-25fe86c0610b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.065986 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" (UID: "dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.066302 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11048d75-f33a-45c7-867a-a1ba4eb48e52" (UID: "11048d75-f33a-45c7-867a-a1ba4eb48e52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.066718 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds" (OuterVolumeSpecName: "kube-api-access-grdds") pod "11048d75-f33a-45c7-867a-a1ba4eb48e52" (UID: "11048d75-f33a-45c7-867a-a1ba4eb48e52"). InnerVolumeSpecName "kube-api-access-grdds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.067546 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s" (OuterVolumeSpecName: "kube-api-access-gzq2s") pod "19bc9850-2467-4ba0-a2bd-d901ad222ea1" (UID: "19bc9850-2467-4ba0-a2bd-d901ad222ea1"). InnerVolumeSpecName "kube-api-access-gzq2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.069704 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t" (OuterVolumeSpecName: "kube-api-access-bzt6t") pod "03d869cb-334d-4d2a-917c-25fe86c0610b" (UID: "03d869cb-334d-4d2a-917c-25fe86c0610b"). InnerVolumeSpecName "kube-api-access-bzt6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.073618 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t" (OuterVolumeSpecName: "kube-api-access-nb58t") pod "a0486055-d9f4-43dd-a25e-16549d574740" (UID: "a0486055-d9f4-43dd-a25e-16549d574740"). InnerVolumeSpecName "kube-api-access-nb58t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.073675 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt" (OuterVolumeSpecName: "kube-api-access-zxjwt") pod "dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" (UID: "dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb"). InnerVolumeSpecName "kube-api-access-zxjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.130509 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9e5-account-create-update-s49bs" event={"ID":"03d869cb-334d-4d2a-917c-25fe86c0610b","Type":"ContainerDied","Data":"aec9766ce538eddf780c0c12cc697a38dde2e62e0fa3474f81aed8082d32b3b2"} Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.130542 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec9766ce538eddf780c0c12cc697a38dde2e62e0fa3474f81aed8082d32b3b2" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.130588 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d9e5-account-create-update-s49bs" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.138918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-z9mm9" event={"ID":"19bc9850-2467-4ba0-a2bd-d901ad222ea1","Type":"ContainerDied","Data":"8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4"} Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.138966 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a419430c14329bcbe60439b5276e919f8c09528e1751dc4792f3ceb7fec7db4" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.139027 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-z9mm9" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.148501 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-xbqq4" event={"ID":"a0486055-d9f4-43dd-a25e-16549d574740","Type":"ContainerDied","Data":"8263d235e37deb72409636c3b7bf917783b2880dacedf3148d33a2eedd1a67e3"} Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.148572 4833 scope.go:117] "RemoveContainer" containerID="49d1d5cc642cb755cd933c9fb3ef20da52cdf686cbb39256b8931f7e8c755f8f" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.148741 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-xbqq4" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.155913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-402d-account-create-update-8pv4k" event={"ID":"11048d75-f33a-45c7-867a-a1ba4eb48e52","Type":"ContainerDied","Data":"4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e"} Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.155948 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d682fc6584f42719ac2c5eec50c4bd50ad35d116a7e2a3b2a00bfdc8c6f005e" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.156012 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-402d-account-create-update-8pv4k" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.160462 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t27br" event={"ID":"dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb","Type":"ContainerDied","Data":"4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8"} Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.160509 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3f1f80d026d8c59ac0da8d3dc0371d71b3c4e80c864eda763e24c461ea2fe8" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.160609 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t27br" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163382 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03d869cb-334d-4d2a-917c-25fe86c0610b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163405 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bc9850-2467-4ba0-a2bd-d901ad222ea1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163419 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzt6t\" (UniqueName: \"kubernetes.io/projected/03d869cb-334d-4d2a-917c-25fe86c0610b-kube-api-access-bzt6t\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163431 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grdds\" (UniqueName: \"kubernetes.io/projected/11048d75-f33a-45c7-867a-a1ba4eb48e52-kube-api-access-grdds\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163463 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzq2s\" (UniqueName: \"kubernetes.io/projected/19bc9850-2467-4ba0-a2bd-d901ad222ea1-kube-api-access-gzq2s\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163477 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163488 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11048d75-f33a-45c7-867a-a1ba4eb48e52-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163499 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxjwt\" (UniqueName: \"kubernetes.io/projected/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb-kube-api-access-zxjwt\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.163510 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb58t\" (UniqueName: \"kubernetes.io/projected/a0486055-d9f4-43dd-a25e-16549d574740-kube-api-access-nb58t\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.165651 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a0486055-d9f4-43dd-a25e-16549d574740" (UID: "a0486055-d9f4-43dd-a25e-16549d574740"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.166625 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a0486055-d9f4-43dd-a25e-16549d574740" (UID: "a0486055-d9f4-43dd-a25e-16549d574740"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.171178 4833 scope.go:117] "RemoveContainer" containerID="bbcd39c98861f1b218a15ed081d554049272378619182ed6e2cf52a029da39e5" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.184090 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a0486055-d9f4-43dd-a25e-16549d574740" (UID: "a0486055-d9f4-43dd-a25e-16549d574740"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.211339 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config" (OuterVolumeSpecName: "config") pod "a0486055-d9f4-43dd-a25e-16549d574740" (UID: "a0486055-d9f4-43dd-a25e-16549d574740"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.265296 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.265594 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.265604 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.265614 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a0486055-d9f4-43dd-a25e-16549d574740-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.399545 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qtxmt"] Jan 27 14:31:04 crc kubenswrapper[4833]: W0127 14:31:04.399817 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4502e8ab_7ccb_428d_a8e3_6c5df79bacc1.slice/crio-510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb 
WatchSource:0}: Error finding container 510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb: Status 404 returned error can't find the container with id 510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.490762 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:31:04 crc kubenswrapper[4833]: I0127 14:31:04.498341 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-xbqq4"] Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.205724 4833 generic.go:334] "Generic (PLEG): container finished" podID="4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" containerID="64567016eb39649dd5fafa726e572fc1e7050b9a129371be6987bf8897eb27b6" exitCode=0 Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.205989 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtxmt" event={"ID":"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1","Type":"ContainerDied","Data":"64567016eb39649dd5fafa726e572fc1e7050b9a129371be6987bf8897eb27b6"} Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.206015 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtxmt" event={"ID":"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1","Type":"ContainerStarted","Data":"510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb"} Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.240159 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0486055-d9f4-43dd-a25e-16549d574740" path="/var/lib/kubelet/pods/a0486055-d9f4-43dd-a25e-16549d574740/volumes" Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.240725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerStarted","Data":"21b7ee68b64ac1e968393a57f80245091770e7cfa784ac7779fbe4e16f96c367"} Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.267852 4833 generic.go:334] "Generic (PLEG): container finished" podID="11a66058-c0bb-4357-a752-8823939d7ee3" containerID="da30d6c82ce0b7e512dd13b4c02fba86077f7735700a18c1da6573968f4974aa" exitCode=0 Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.267905 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4cbg4" event={"ID":"11a66058-c0bb-4357-a752-8823939d7ee3","Type":"ContainerDied","Data":"da30d6c82ce0b7e512dd13b4c02fba86077f7735700a18c1da6573968f4974aa"} Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.598585 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.614361 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df43a2ef-c36c-4b08-bee6-6820e443220c-etc-swift\") pod \"swift-storage-0\" (UID: \"df43a2ef-c36c-4b08-bee6-6820e443220c\") " pod="openstack/swift-storage-0" Jan 27 14:31:05 crc kubenswrapper[4833]: I0127 14:31:05.891781 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.258983 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kmtpk"] Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259570 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19bc9850-2467-4ba0-a2bd-d901ad222ea1" containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259582 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="19bc9850-2467-4ba0-a2bd-d901ad222ea1" containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259595 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11048d75-f33a-45c7-867a-a1ba4eb48e52" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259601 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="11048d75-f33a-45c7-867a-a1ba4eb48e52" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259613 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="dnsmasq-dns" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259619 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="dnsmasq-dns" Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259627 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="init" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259633 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="init" Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259642 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" 
containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259649 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: E0127 14:31:06.259657 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d869cb-334d-4d2a-917c-25fe86c0610b" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259672 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d869cb-334d-4d2a-917c-25fe86c0610b" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259824 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d869cb-334d-4d2a-917c-25fe86c0610b" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259836 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="11048d75-f33a-45c7-867a-a1ba4eb48e52" containerName="mariadb-account-create-update" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259847 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="dnsmasq-dns" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259863 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="19bc9850-2467-4ba0-a2bd-d901ad222ea1" containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.259870 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" containerName="mariadb-database-create" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.260457 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.283131 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kmtpk"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.330264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lxw\" (UniqueName: \"kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.330435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.374722 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f0ec-account-create-update-ptncm"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.375683 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.380829 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.435472 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.435567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lxw\" (UniqueName: \"kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.435630 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6htr\" (UniqueName: \"kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr\") pod \"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.435712 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts\") pod \"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.436731 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.467269 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lxw\" (UniqueName: \"kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw\") pod \"keystone-db-create-kmtpk\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.491120 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f0ec-account-create-update-ptncm"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.509751 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-224ld"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.510708 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.539738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts\") pod \"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.539849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6htr\" (UniqueName: \"kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr\") pod \"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.539894 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8k2m\" (UniqueName: \"kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.539914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.540537 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts\") pod 
\"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.544664 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-224ld"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.587606 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.622411 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7dd7-account-create-update-b599r"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.626416 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.630322 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.632324 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6htr\" (UniqueName: \"kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr\") pod \"keystone-f0ec-account-create-update-ptncm\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.634515 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dd7-account-create-update-b599r"] Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.643860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8k2m\" (UniqueName: \"kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " 
pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.645548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.646651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.657126 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:31:06 crc kubenswrapper[4833]: W0127 14:31:06.659844 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf43a2ef_c36c_4b08_bee6_6820e443220c.slice/crio-675543a1667ff8801518b280e95df5cbdfeabece02050b14d180936ec65b29ef WatchSource:0}: Error finding container 675543a1667ff8801518b280e95df5cbdfeabece02050b14d180936ec65b29ef: Status 404 returned error can't find the container with id 675543a1667ff8801518b280e95df5cbdfeabece02050b14d180936ec65b29ef Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.666208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8k2m\" (UniqueName: \"kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m\") pod \"placement-db-create-224ld\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.707943 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.747927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfq7q\" (UniqueName: \"kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.748113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.833800 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-224ld" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.853593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.853680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfq7q\" (UniqueName: \"kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.854499 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.869328 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.877114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfq7q\" (UniqueName: \"kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q\") pod \"placement-7dd7-account-create-update-b599r\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.954903 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts\") pod \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.955025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwdwf\" (UniqueName: \"kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf\") pod \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\" (UID: \"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1\") " Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.955712 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" (UID: "4502e8ab-7ccb-428d-a8e3-6c5df79bacc1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.956215 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.974021 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf" (OuterVolumeSpecName: "kube-api-access-kwdwf") pod "4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" (UID: "4502e8ab-7ccb-428d-a8e3-6c5df79bacc1"). InnerVolumeSpecName "kube-api-access-kwdwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:06 crc kubenswrapper[4833]: I0127 14:31:06.996131 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.059596 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwdwf\" (UniqueName: \"kubernetes.io/projected/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1-kube-api-access-kwdwf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.085839 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.150397 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kmtpk"] Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160431 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160603 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160646 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160673 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160690 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 
crc kubenswrapper[4833]: I0127 14:31:07.160723 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.160743 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84\") pod \"11a66058-c0bb-4357-a752-8823939d7ee3\" (UID: \"11a66058-c0bb-4357-a752-8823939d7ee3\") " Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.163561 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.163735 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.184969 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.200929 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84" (OuterVolumeSpecName: "kube-api-access-qhx84") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "kube-api-access-qhx84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.203063 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-26wng"] Jan 27 14:31:07 crc kubenswrapper[4833]: E0127 14:31:07.203604 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a66058-c0bb-4357-a752-8823939d7ee3" containerName="swift-ring-rebalance" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.203627 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a66058-c0bb-4357-a752-8823939d7ee3" containerName="swift-ring-rebalance" Jan 27 14:31:07 crc kubenswrapper[4833]: E0127 14:31:07.203654 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" containerName="mariadb-account-create-update" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.203661 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" containerName="mariadb-account-create-update" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.203929 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" containerName="mariadb-account-create-update" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.203948 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a66058-c0bb-4357-a752-8823939d7ee3" containerName="swift-ring-rebalance" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.204693 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.209153 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7fbqk" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.209394 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.223251 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.236970 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264292 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2jp\" (UniqueName: \"kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264371 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264485 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264522 4833 
reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264531 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11a66058-c0bb-4357-a752-8823939d7ee3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264539 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264547 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/11a66058-c0bb-4357-a752-8823939d7ee3-kube-api-access-qhx84\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.264557 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11a66058-c0bb-4357-a752-8823939d7ee3-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.267226 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-26wng"] Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.274662 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts" (OuterVolumeSpecName: "scripts") pod "11a66058-c0bb-4357-a752-8823939d7ee3" (UID: "11a66058-c0bb-4357-a752-8823939d7ee3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.323638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"675543a1667ff8801518b280e95df5cbdfeabece02050b14d180936ec65b29ef"} Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.323995 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f0ec-account-create-update-ptncm"] Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.336342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtxmt" event={"ID":"4502e8ab-7ccb-428d-a8e3-6c5df79bacc1","Type":"ContainerDied","Data":"510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb"} Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.336371 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510ba676c91620aff1e6a0aea7056b6c7bd3fca40b3bbf62f3e7d397f4ff5aeb" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.336437 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qtxmt" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.338433 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kmtpk" event={"ID":"e4d43701-074b-492e-82bc-7745956bb701","Type":"ContainerStarted","Data":"fb0a144abeea7198489c3d6b8abdab614f278c5cdf1720503403df713928c98d"} Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.340204 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerStarted","Data":"75005c2907a91543aa2a3dad9b9cf40eed33e480e5643780722a7a44e6933832"} Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.344603 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-4cbg4" event={"ID":"11a66058-c0bb-4357-a752-8823939d7ee3","Type":"ContainerDied","Data":"36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769"} Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.344628 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36acf3e425b5830bf86dfa2c51bb90231c513f2d3ceaffe85c012b6421015769" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.344686 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4cbg4" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.367970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.368027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.368165 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.368216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2jp\" (UniqueName: \"kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.368314 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11a66058-c0bb-4357-a752-8823939d7ee3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.373152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.373182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.376321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.389311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2jp\" (UniqueName: \"kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp\") pod \"glance-db-sync-26wng\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.397168 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-224ld"] Jan 27 14:31:07 crc kubenswrapper[4833]: W0127 14:31:07.399956 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16db8127_7542_4d49_bd70_1b2a994d842e.slice/crio-28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89 WatchSource:0}: Error finding container 28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89: Status 404 returned error can't find the container with id 
28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89 Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.535635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-26wng" Jan 27 14:31:07 crc kubenswrapper[4833]: I0127 14:31:07.613144 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dd7-account-create-update-b599r"] Jan 27 14:31:07 crc kubenswrapper[4833]: W0127 14:31:07.636785 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c77aee3_afff_40bc_b8c8_963be9bc87ab.slice/crio-1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057 WatchSource:0}: Error finding container 1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057: Status 404 returned error can't find the container with id 1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057 Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.001726 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-xbqq4" podUID="a0486055-d9f4-43dd-a25e-16549d574740" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: i/o timeout" Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.168218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-26wng"] Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.355253 4833 generic.go:334] "Generic (PLEG): container finished" podID="e4d43701-074b-492e-82bc-7745956bb701" containerID="973e1ab5f561584a4835e0fb84bcb4f7e86dc32694b42b07ca4067a87b4df2f4" exitCode=0 Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.355941 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kmtpk" event={"ID":"e4d43701-074b-492e-82bc-7745956bb701","Type":"ContainerDied","Data":"973e1ab5f561584a4835e0fb84bcb4f7e86dc32694b42b07ca4067a87b4df2f4"} Jan 27 
14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.357195 4833 generic.go:334] "Generic (PLEG): container finished" podID="4c77aee3-afff-40bc-b8c8-963be9bc87ab" containerID="7cbf4a9ba25601a662054bebcb7956e9ac06e51f9392f8953e54d8d8c7f9391d" exitCode=0 Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.357238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd7-account-create-update-b599r" event={"ID":"4c77aee3-afff-40bc-b8c8-963be9bc87ab","Type":"ContainerDied","Data":"7cbf4a9ba25601a662054bebcb7956e9ac06e51f9392f8953e54d8d8c7f9391d"} Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.357256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd7-account-create-update-b599r" event={"ID":"4c77aee3-afff-40bc-b8c8-963be9bc87ab","Type":"ContainerStarted","Data":"1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057"} Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.358799 4833 generic.go:334] "Generic (PLEG): container finished" podID="16db8127-7542-4d49-bd70-1b2a994d842e" containerID="aa30d4e02508c4d2fd8ca082f9368ae6afac346d7b4cae961ef2623329e42894" exitCode=0 Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.358853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-224ld" event={"ID":"16db8127-7542-4d49-bd70-1b2a994d842e","Type":"ContainerDied","Data":"aa30d4e02508c4d2fd8ca082f9368ae6afac346d7b4cae961ef2623329e42894"} Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.358873 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-224ld" event={"ID":"16db8127-7542-4d49-bd70-1b2a994d842e","Type":"ContainerStarted","Data":"28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89"} Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.361624 4833 generic.go:334] "Generic (PLEG): container finished" podID="84b22d86-0d40-49e9-ac4d-7cc87d99c800" 
containerID="95310040e5d8f55d783ab342132e7b92bc5a9500abfe3f99bd11242874fee475" exitCode=0 Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.361661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f0ec-account-create-update-ptncm" event={"ID":"84b22d86-0d40-49e9-ac4d-7cc87d99c800","Type":"ContainerDied","Data":"95310040e5d8f55d783ab342132e7b92bc5a9500abfe3f99bd11242874fee475"} Jan 27 14:31:08 crc kubenswrapper[4833]: I0127 14:31:08.361680 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f0ec-account-create-update-ptncm" event={"ID":"84b22d86-0d40-49e9-ac4d-7cc87d99c800","Type":"ContainerStarted","Data":"bc72cfc4a3b3cead77bee21b8520e04f4fabc5cb247a8275fb85bb04212be010"} Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.382715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-26wng" event={"ID":"69d497f9-964a-4818-9f39-09cf9a0f83fb","Type":"ContainerStarted","Data":"a9fb0a247fd2468652cccac7d3ea7bebadae8119477daf0237cdaf6fc55a29b2"} Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.385221 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"d6a0d2292c88a7f3a9e11ce4e4dd6b98df4ee6516d09978fa3510487380ac893"} Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.385242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"b323c1f7411fc8f30ebabfb63f520da91c316a6919d1751c4d5a05b2b07941e8"} Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.385251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"6a0ab6b39f2ec967941e70d6b69147ff4bfc0161916f63887784a9d81a8d1cdf"} Jan 27 14:31:09 crc kubenswrapper[4833]: 
I0127 14:31:09.385260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"29208b52e020d0bfdc4f0eeac84f7bfa4f31166649bd137fa82c0a7eecad1ca7"} Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.967321 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtxmt"] Jan 27 14:31:09 crc kubenswrapper[4833]: I0127 14:31:09.979307 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qtxmt"] Jan 27 14:31:10 crc kubenswrapper[4833]: I0127 14:31:10.395873 4833 generic.go:334] "Generic (PLEG): container finished" podID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerID="8490ab1952cf3713fa98897b3b243166e18516a777630355c0518129f22ce82d" exitCode=0 Jan 27 14:31:10 crc kubenswrapper[4833]: I0127 14:31:10.395969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerDied","Data":"8490ab1952cf3713fa98897b3b243166e18516a777630355c0518129f22ce82d"} Jan 27 14:31:10 crc kubenswrapper[4833]: I0127 14:31:10.400316 4833 generic.go:334] "Generic (PLEG): container finished" podID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerID="ebad24caa0ef0da71ac712df5c8ba0c36956bda6947f1bbc61a6a1deb89786ee" exitCode=0 Jan 27 14:31:10 crc kubenswrapper[4833]: I0127 14:31:10.400368 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerDied","Data":"ebad24caa0ef0da71ac712df5c8ba0c36956bda6947f1bbc61a6a1deb89786ee"} Jan 27 14:31:10 crc kubenswrapper[4833]: I0127 14:31:10.536602 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.221572 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="4502e8ab-7ccb-428d-a8e3-6c5df79bacc1" path="/var/lib/kubelet/pods/4502e8ab-7ccb-428d-a8e3-6c5df79bacc1/volumes" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.679471 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-224ld" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.685280 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.758108 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfq7q\" (UniqueName: \"kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q\") pod \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.758227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts\") pod \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\" (UID: \"4c77aee3-afff-40bc-b8c8-963be9bc87ab\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.758293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts\") pod \"16db8127-7542-4d49-bd70-1b2a994d842e\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.758415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8k2m\" (UniqueName: \"kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m\") pod \"16db8127-7542-4d49-bd70-1b2a994d842e\" (UID: \"16db8127-7542-4d49-bd70-1b2a994d842e\") " Jan 27 14:31:11 crc 
kubenswrapper[4833]: I0127 14:31:11.758918 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c77aee3-afff-40bc-b8c8-963be9bc87ab" (UID: "4c77aee3-afff-40bc-b8c8-963be9bc87ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.758928 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16db8127-7542-4d49-bd70-1b2a994d842e" (UID: "16db8127-7542-4d49-bd70-1b2a994d842e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.763465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q" (OuterVolumeSpecName: "kube-api-access-wfq7q") pod "4c77aee3-afff-40bc-b8c8-963be9bc87ab" (UID: "4c77aee3-afff-40bc-b8c8-963be9bc87ab"). InnerVolumeSpecName "kube-api-access-wfq7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.763712 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m" (OuterVolumeSpecName: "kube-api-access-p8k2m") pod "16db8127-7542-4d49-bd70-1b2a994d842e" (UID: "16db8127-7542-4d49-bd70-1b2a994d842e"). InnerVolumeSpecName "kube-api-access-p8k2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.802556 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6584j" podUID="71bea80e-a86d-40c6-b72f-9bab663cc6ea" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:31:11 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:31:11 crc kubenswrapper[4833]: > Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.809127 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.848758 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.863638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts\") pod \"e4d43701-074b-492e-82bc-7745956bb701\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.863856 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6lxw\" (UniqueName: \"kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw\") pod \"e4d43701-074b-492e-82bc-7745956bb701\" (UID: \"e4d43701-074b-492e-82bc-7745956bb701\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.864170 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c77aee3-afff-40bc-b8c8-963be9bc87ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.864185 4833 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16db8127-7542-4d49-bd70-1b2a994d842e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.864195 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8k2m\" (UniqueName: \"kubernetes.io/projected/16db8127-7542-4d49-bd70-1b2a994d842e-kube-api-access-p8k2m\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.864205 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfq7q\" (UniqueName: \"kubernetes.io/projected/4c77aee3-afff-40bc-b8c8-963be9bc87ab-kube-api-access-wfq7q\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.864790 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4d43701-074b-492e-82bc-7745956bb701" (UID: "e4d43701-074b-492e-82bc-7745956bb701"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.874593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw" (OuterVolumeSpecName: "kube-api-access-n6lxw") pod "e4d43701-074b-492e-82bc-7745956bb701" (UID: "e4d43701-074b-492e-82bc-7745956bb701"). InnerVolumeSpecName "kube-api-access-n6lxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.965818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts\") pod \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.965929 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6htr\" (UniqueName: \"kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr\") pod \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\" (UID: \"84b22d86-0d40-49e9-ac4d-7cc87d99c800\") " Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.966241 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6lxw\" (UniqueName: \"kubernetes.io/projected/e4d43701-074b-492e-82bc-7745956bb701-kube-api-access-n6lxw\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.966258 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4d43701-074b-492e-82bc-7745956bb701-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.968303 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84b22d86-0d40-49e9-ac4d-7cc87d99c800" (UID: "84b22d86-0d40-49e9-ac4d-7cc87d99c800"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:11 crc kubenswrapper[4833]: I0127 14:31:11.977646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr" (OuterVolumeSpecName: "kube-api-access-h6htr") pod "84b22d86-0d40-49e9-ac4d-7cc87d99c800" (UID: "84b22d86-0d40-49e9-ac4d-7cc87d99c800"). InnerVolumeSpecName "kube-api-access-h6htr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.069533 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84b22d86-0d40-49e9-ac4d-7cc87d99c800-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.069703 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6htr\" (UniqueName: \"kubernetes.io/projected/84b22d86-0d40-49e9-ac4d-7cc87d99c800-kube-api-access-h6htr\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.418258 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerStarted","Data":"44459fc8bdb87c753a7a512c9010e64d2f4e66a386367e7f526ec47fdbb94134"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.423328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f0ec-account-create-update-ptncm" event={"ID":"84b22d86-0d40-49e9-ac4d-7cc87d99c800","Type":"ContainerDied","Data":"bc72cfc4a3b3cead77bee21b8520e04f4fabc5cb247a8275fb85bb04212be010"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.423372 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f0ec-account-create-update-ptncm" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.423381 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc72cfc4a3b3cead77bee21b8520e04f4fabc5cb247a8275fb85bb04212be010" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.433429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerStarted","Data":"baefb3371f995e5614217f2f79e7b17f9c5402013bdb96b3d067d8592c2136fa"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.434297 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.446868 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kmtpk" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.447616 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kmtpk" event={"ID":"e4d43701-074b-492e-82bc-7745956bb701","Type":"ContainerDied","Data":"fb0a144abeea7198489c3d6b8abdab614f278c5cdf1720503403df713928c98d"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.447839 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0a144abeea7198489c3d6b8abdab614f278c5cdf1720503403df713928c98d" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.450155 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=9.163619063 podStartE2EDuration="54.450132911s" podCreationTimestamp="2026-01-27 14:30:18 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.363396768 +0000 UTC m=+1128.014721170" lastFinishedPulling="2026-01-27 14:31:11.649910626 +0000 UTC m=+1173.301235018" observedRunningTime="2026-01-27 
14:31:12.448679816 +0000 UTC m=+1174.100004238" watchObservedRunningTime="2026-01-27 14:31:12.450132911 +0000 UTC m=+1174.101457313" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.450817 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dd7-account-create-update-b599r" event={"ID":"4c77aee3-afff-40bc-b8c8-963be9bc87ab","Type":"ContainerDied","Data":"1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.450888 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f16d19975b50f49157cd9b42750389171dec151e5e8bcfb7afca5a3d3c86057" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.451085 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dd7-account-create-update-b599r" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.453833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-224ld" event={"ID":"16db8127-7542-4d49-bd70-1b2a994d842e","Type":"ContainerDied","Data":"28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.453873 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28635c2fd3b6d5931581ce0a58f99d58c08f011b6a02cf625c0d0c1dfca85e89" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.453964 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-224ld" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.463682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerStarted","Data":"f2c2ac8c4b8f5f8a2738bf99f826e32f49d5267d2a4f1954669c0a2a08017808"} Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.464282 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.495472 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.721533378 podStartE2EDuration="1m0.495456932s" podCreationTimestamp="2026-01-27 14:30:12 +0000 UTC" firstStartedPulling="2026-01-27 14:30:25.189438131 +0000 UTC m=+1126.840762533" lastFinishedPulling="2026-01-27 14:30:34.963361685 +0000 UTC m=+1136.614686087" observedRunningTime="2026-01-27 14:31:12.487700867 +0000 UTC m=+1174.139025279" watchObservedRunningTime="2026-01-27 14:31:12.495456932 +0000 UTC m=+1174.146781334" Jan 27 14:31:12 crc kubenswrapper[4833]: I0127 14:31:12.521531 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.931257779 podStartE2EDuration="1m0.521510575s" podCreationTimestamp="2026-01-27 14:30:12 +0000 UTC" firstStartedPulling="2026-01-27 14:30:26.112645281 +0000 UTC m=+1127.763969683" lastFinishedPulling="2026-01-27 14:30:34.702898077 +0000 UTC m=+1136.354222479" observedRunningTime="2026-01-27 14:31:12.513582035 +0000 UTC m=+1174.164906457" watchObservedRunningTime="2026-01-27 14:31:12.521510575 +0000 UTC m=+1174.172834977" Jan 27 14:31:13 crc kubenswrapper[4833]: I0127 14:31:13.481069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"c3720f823e4574a966d5607bbe4546a92fd742d4cb49dd5949d69c28f289f38c"} Jan 27 14:31:13 crc kubenswrapper[4833]: I0127 14:31:13.481488 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"83bcc77949c01844076c884b1e95bc66500ba81faa6a4a022c43fc05d58f4102"} Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.492291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"4a002428f5ac86591a496e505cd5fd30061bc11ba8b1421b933745ffc6dccd12"} Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.492641 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"137f06d41b1615f336b8942e889d97a335e7aa36ed2001f31a9766a27311399b"} Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.850074 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.980153 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qmw5s"] Jan 27 14:31:14 crc kubenswrapper[4833]: E0127 14:31:14.981123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c77aee3-afff-40bc-b8c8-963be9bc87ab" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981152 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c77aee3-afff-40bc-b8c8-963be9bc87ab" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: E0127 14:31:14.981167 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16db8127-7542-4d49-bd70-1b2a994d842e" 
containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981177 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="16db8127-7542-4d49-bd70-1b2a994d842e" containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: E0127 14:31:14.981222 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b22d86-0d40-49e9-ac4d-7cc87d99c800" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981233 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b22d86-0d40-49e9-ac4d-7cc87d99c800" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: E0127 14:31:14.981263 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d43701-074b-492e-82bc-7745956bb701" containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981275 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d43701-074b-492e-82bc-7745956bb701" containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981825 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c77aee3-afff-40bc-b8c8-963be9bc87ab" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981876 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="16db8127-7542-4d49-bd70-1b2a994d842e" containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981895 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b22d86-0d40-49e9-ac4d-7cc87d99c800" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.981930 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d43701-074b-492e-82bc-7745956bb701" containerName="mariadb-database-create" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.984611 
4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:14 crc kubenswrapper[4833]: I0127 14:31:14.990542 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.012037 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qmw5s"] Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.126399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts\") pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.126759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4b26\" (UniqueName: \"kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26\") pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.227927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4b26\" (UniqueName: \"kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26\") pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.228036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts\") 
pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.228828 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts\") pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.247976 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4b26\" (UniqueName: \"kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26\") pod \"root-account-create-update-qmw5s\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:15 crc kubenswrapper[4833]: I0127 14:31:15.340483 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:16 crc kubenswrapper[4833]: I0127 14:31:16.810939 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6584j" podUID="71bea80e-a86d-40c6-b72f-9bab663cc6ea" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:31:16 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:31:16 crc kubenswrapper[4833]: > Jan 27 14:31:16 crc kubenswrapper[4833]: I0127 14:31:16.841567 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:31:16 crc kubenswrapper[4833]: I0127 14:31:16.849509 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9xc7c" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.081659 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6584j-config-kb89p"] Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.083383 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.085205 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.093207 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6584j-config-kb89p"] Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158883 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158903 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") 
" pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158961 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.158999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plcw\" (UniqueName: \"kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: 
\"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plcw\" (UniqueName: \"kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run\") pod \"ovn-controller-6584j-config-kb89p\" (UID: 
\"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.260996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.261001 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.262411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.298678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plcw\" (UniqueName: \"kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw\") pod \"ovn-controller-6584j-config-kb89p\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:17 crc kubenswrapper[4833]: I0127 14:31:17.419407 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:19 crc kubenswrapper[4833]: I0127 14:31:19.849770 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:19 crc kubenswrapper[4833]: I0127 14:31:19.852935 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:20 crc kubenswrapper[4833]: I0127 14:31:20.555422 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:21 crc kubenswrapper[4833]: I0127 14:31:21.791711 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6584j" podUID="71bea80e-a86d-40c6-b72f-9bab663cc6ea" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:31:21 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:31:21 crc kubenswrapper[4833]: > Jan 27 14:31:22 crc kubenswrapper[4833]: I0127 14:31:22.991459 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6584j-config-kb89p"] Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.147802 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qmw5s"] Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.357378 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.357821 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="prometheus" containerID="cri-o://21b7ee68b64ac1e968393a57f80245091770e7cfa784ac7779fbe4e16f96c367" gracePeriod=600 Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.358216 4833 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="thanos-sidecar" containerID="cri-o://44459fc8bdb87c753a7a512c9010e64d2f4e66a386367e7f526ec47fdbb94134" gracePeriod=600 Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.358258 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="config-reloader" containerID="cri-o://75005c2907a91543aa2a3dad9b9cf40eed33e480e5643780722a7a44e6933832" gracePeriod=600 Jan 27 14:31:23 crc kubenswrapper[4833]: W0127 14:31:23.506710 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac38b3a3_8683_4ca3_b649_6c4b8e11a12c.slice/crio-6ee717dd8332df59a07f2d8d18b5a643ae103fed2ad05d96cd1462b08beb295b WatchSource:0}: Error finding container 6ee717dd8332df59a07f2d8d18b5a643ae103fed2ad05d96cd1462b08beb295b: Status 404 returned error can't find the container with id 6ee717dd8332df59a07f2d8d18b5a643ae103fed2ad05d96cd1462b08beb295b Jan 27 14:31:23 crc kubenswrapper[4833]: W0127 14:31:23.509591 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ae6c77_5b30_4091_b78e_6ce66768de51.slice/crio-0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8 WatchSource:0}: Error finding container 0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8: Status 404 returned error can't find the container with id 0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8 Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.581782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qmw5s" 
event={"ID":"61ae6c77-5b30-4091-b78e-6ce66768de51","Type":"ContainerStarted","Data":"0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8"} Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.583550 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-26wng" event={"ID":"69d497f9-964a-4818-9f39-09cf9a0f83fb","Type":"ContainerStarted","Data":"d43ced6ff834553886ed37456bb08f98e09f184342a44034177188e0c002abae"} Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.585275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6584j-config-kb89p" event={"ID":"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c","Type":"ContainerStarted","Data":"6ee717dd8332df59a07f2d8d18b5a643ae103fed2ad05d96cd1462b08beb295b"} Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.591149 4833 generic.go:334] "Generic (PLEG): container finished" podID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerID="44459fc8bdb87c753a7a512c9010e64d2f4e66a386367e7f526ec47fdbb94134" exitCode=0 Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.591182 4833 generic.go:334] "Generic (PLEG): container finished" podID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerID="21b7ee68b64ac1e968393a57f80245091770e7cfa784ac7779fbe4e16f96c367" exitCode=0 Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.591212 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerDied","Data":"44459fc8bdb87c753a7a512c9010e64d2f4e66a386367e7f526ec47fdbb94134"} Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.591238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerDied","Data":"21b7ee68b64ac1e968393a57f80245091770e7cfa784ac7779fbe4e16f96c367"} Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.612723 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-26wng" podStartSLOduration=2.35205624 podStartE2EDuration="16.612698165s" podCreationTimestamp="2026-01-27 14:31:07 +0000 UTC" firstStartedPulling="2026-01-27 14:31:08.375668221 +0000 UTC m=+1170.026992623" lastFinishedPulling="2026-01-27 14:31:22.636310136 +0000 UTC m=+1184.287634548" observedRunningTime="2026-01-27 14:31:23.605837031 +0000 UTC m=+1185.257161433" watchObservedRunningTime="2026-01-27 14:31:23.612698165 +0000 UTC m=+1185.264022567" Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.726296 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 14:31:23 crc kubenswrapper[4833]: I0127 14:31:23.804654 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.185192 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fttzp"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.186523 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.221632 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5915-account-create-update-wd9r6"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.224574 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.233720 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.285371 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fttzp"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.291863 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-wchjj"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.295303 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.298556 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.298581 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ldtld" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.307746 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5915-account-create-update-wd9r6"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.322810 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-wchjj"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.338485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqh2\" (UniqueName: \"kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.338577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-snrj8\" (UniqueName: \"kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.338639 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.338658 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.344298 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d4xbb"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.345689 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.375770 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d4xbb"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.440285 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjhh\" (UniqueName: \"kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.440620 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxqh2\" (UniqueName: \"kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.440800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.440923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-snrj8\" (UniqueName: \"kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnhf\" (UniqueName: \"kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.441678 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.442139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.442316 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.479196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqh2\" (UniqueName: \"kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2\") pod \"cinder-db-create-fttzp\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.495023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrj8\" (UniqueName: \"kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8\") pod \"cinder-5915-account-create-update-wd9r6\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.497629 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-10b4-account-create-update-hjwjt"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.498685 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.505748 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.517338 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.523092 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-10b4-account-create-update-hjwjt"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543374 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twnhf\" (UniqueName: \"kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.543517 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xjhh\" (UniqueName: \"kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.546533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.558032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.558389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.558687 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.564148 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.583300 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xnnkw"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.584657 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.601063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xjhh\" (UniqueName: \"kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh\") pod \"barbican-db-create-d4xbb\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.601726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twnhf\" (UniqueName: \"kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf\") pod \"watcher-db-sync-wchjj\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.610866 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xnnkw"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.621822 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-wchjj" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.644792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbvcd\" (UniqueName: \"kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.645076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.644793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6584j-config-kb89p" event={"ID":"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c","Type":"ContainerStarted","Data":"09628c29d1006556dae21ba0d262c060be8a21381d838c8b832ad82d685ae3e6"} Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.655779 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-q6v7w"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.657117 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.667700 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.667930 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.668277 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.668829 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fg8kt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.669051 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.679493 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q6v7w"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.705186 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6584j-config-kb89p" podStartSLOduration=7.7051660349999995 podStartE2EDuration="7.705166035s" podCreationTimestamp="2026-01-27 14:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:24.69069669 +0000 UTC m=+1186.342021092" watchObservedRunningTime="2026-01-27 14:31:24.705166035 +0000 UTC m=+1186.356490437" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.724027 4833 generic.go:334] "Generic (PLEG): container finished" podID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerID="75005c2907a91543aa2a3dad9b9cf40eed33e480e5643780722a7a44e6933832" exitCode=0 Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.724118 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerDied","Data":"75005c2907a91543aa2a3dad9b9cf40eed33e480e5643780722a7a44e6933832"} Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.740297 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1cea-account-create-update-mdk7b"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.741660 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.746101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgq2\" (UniqueName: \"kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.746197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.746870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.746978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbvcd\" (UniqueName: 
\"kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.747005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvt58\" (UniqueName: \"kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.747073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.747111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.747721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.749513 4833 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.767211 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1cea-account-create-update-mdk7b"] Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.777016 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbvcd\" (UniqueName: \"kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd\") pod \"barbican-10b4-account-create-update-hjwjt\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.799284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"6b873199a4b0e7e8c86d4e12d8e40fe7535db6033e0ffec9327d9ca9c95ff139"} Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.799322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"5fe51c642ca9ee47a811b0f1553999a73f1d55e392c2371f3f25c958150ee0d5"} Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.802785 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qmw5s" event={"ID":"61ae6c77-5b30-4091-b78e-6ce66768de51","Type":"ContainerStarted","Data":"7560542a84023f545c1655f12135aed8852b69e3fff7781a0f97a663934a85c0"} Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.837106 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.838143 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.848799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.848860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgq2\" (UniqueName: \"kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.848893 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.848927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.848968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvt58\" (UniqueName: \"kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " 
pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.849012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srs8n\" (UniqueName: \"kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.849041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.850504 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.853671 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qmw5s" podStartSLOduration=10.85364708 podStartE2EDuration="10.85364708s" podCreationTimestamp="2026-01-27 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:24.817834425 +0000 UTC m=+1186.469158827" watchObservedRunningTime="2026-01-27 14:31:24.85364708 +0000 UTC m=+1186.504971482" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.858887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.863129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.875948 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgq2\" (UniqueName: \"kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2\") pod \"neutron-db-create-xnnkw\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.882839 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvt58\" (UniqueName: \"kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58\") pod \"keystone-db-sync-q6v7w\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950136 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950250 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950275 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950304 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tdpw\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950385 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: 
\"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950404 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950440 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950492 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets\") pod \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\" (UID: \"d81b605e-53b5-4bf9-9220-b4f5a37d2f70\") " Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950894 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srs8n\" (UniqueName: \"kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.950949 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.951966 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.952783 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.952949 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.956804 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out" (OuterVolumeSpecName: "config-out") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.970482 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config" (OuterVolumeSpecName: "config") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4833]: I0127 14:31:24.984967 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srs8n\" (UniqueName: \"kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n\") pod \"neutron-1cea-account-create-update-mdk7b\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.003367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw" (OuterVolumeSpecName: "kube-api-access-6tdpw") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "kube-api-access-6tdpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.004081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "pvc-fe2666da-d978-4526-ad6b-c7fb563ec194". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.005375 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.006276 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config" (OuterVolumeSpecName: "web-config") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.008608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "d81b605e-53b5-4bf9-9220-b4f5a37d2f70" (UID: "d81b605e-53b5-4bf9-9220-b4f5a37d2f70"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052247 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052280 4833 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052320 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") on node \"crc\" " Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052332 4833 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052341 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tdpw\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-kube-api-access-6tdpw\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052354 4833 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052362 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052371 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.052382 4833 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d81b605e-53b5-4bf9-9220-b4f5a37d2f70-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.082858 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.091001 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.106199 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.164726 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.164888 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fe2666da-d978-4526-ad6b-c7fb563ec194" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194") on node "crc" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.261615 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.305390 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fttzp"] Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.387564 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5915-account-create-update-wd9r6"] Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.653548 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d4xbb"] Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.730388 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-wchjj"] Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.861941 4833 generic.go:334] "Generic (PLEG): container finished" podID="61ae6c77-5b30-4091-b78e-6ce66768de51" containerID="7560542a84023f545c1655f12135aed8852b69e3fff7781a0f97a663934a85c0" exitCode=0 Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.862370 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qmw5s" 
event={"ID":"61ae6c77-5b30-4091-b78e-6ce66768de51","Type":"ContainerDied","Data":"7560542a84023f545c1655f12135aed8852b69e3fff7781a0f97a663934a85c0"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.873598 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5915-account-create-update-wd9r6" event={"ID":"2c4cf139-86d7-47e5-aba6-ae965bc89ed8","Type":"ContainerStarted","Data":"8394515f43949fe623d0311c715c4b3eb884a6163d1df6c8079452827a6774c1"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.880226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wchjj" event={"ID":"3df071dc-eb5b-40dd-85ea-430f44ab198f","Type":"ContainerStarted","Data":"f26ab8c81dd269e293670db877255e28bbfdd9cc7897be3d8946215669a43bcb"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.884555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4xbb" event={"ID":"3af7b247-ce6a-494a-97d5-1d21afcf7727","Type":"ContainerStarted","Data":"ad8c2cd90f88bf2389039d22f689276ec8b7aa152906114f5c5b0254e0afd858"} Jan 27 14:31:25 crc kubenswrapper[4833]: E0127 14:31:25.884750 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd81b605e_53b5_4bf9_9220_b4f5a37d2f70.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ae6c77_5b30_4091_b78e_6ce66768de51.slice/crio-conmon-7560542a84023f545c1655f12135aed8852b69e3fff7781a0f97a663934a85c0.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.902379 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.902913 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d81b605e-53b5-4bf9-9220-b4f5a37d2f70","Type":"ContainerDied","Data":"b727daeb7d7be2e39649beb0b8e5f9598b930ec3499b760a63cd68db139076b0"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.903001 4833 scope.go:117] "RemoveContainer" containerID="44459fc8bdb87c753a7a512c9010e64d2f4e66a386367e7f526ec47fdbb94134" Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.907155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fttzp" event={"ID":"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff","Type":"ContainerStarted","Data":"ff2f3f3ca32696bb130fcb63792787d95f188f9dd8c590e428bed4909d57d02a"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.911964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"ca16f9716df53f771a1ce0debb89e9e3ef9a279cd6781d55d512846c627e98df"} Jan 27 14:31:25 crc kubenswrapper[4833]: I0127 14:31:25.968316 4833 scope.go:117] "RemoveContainer" containerID="75005c2907a91543aa2a3dad9b9cf40eed33e480e5643780722a7a44e6933832" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.018111 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.027427 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.052625 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-10b4-account-create-update-hjwjt"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.064905 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-xnnkw"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.069869 4833 scope.go:117] "RemoveContainer" containerID="21b7ee68b64ac1e968393a57f80245091770e7cfa784ac7779fbe4e16f96c367" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.086461 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4833]: E0127 14:31:26.086845 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="init-config-reloader" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.086857 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="init-config-reloader" Jan 27 14:31:26 crc kubenswrapper[4833]: E0127 14:31:26.086866 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.086872 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4833]: E0127 14:31:26.086880 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.086887 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4833]: E0127 14:31:26.086898 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="config-reloader" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.086904 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="config-reloader" Jan 27 14:31:26 crc 
kubenswrapper[4833]: I0127 14:31:26.087088 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="config-reloader" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.087109 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.087126 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.088633 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.096097 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.096277 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.096382 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-r6fq9" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.096512 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.098411 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.101233 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.101703 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.104879 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.106078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.109768 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.157561 4833 scope.go:117] "RemoveContainer" containerID="6cde151894058c02535de96fe019040449253d19f1811476686b454d175a3315" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.179250 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-q6v7w"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193055 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193111 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kpn\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" 
Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193297 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193341 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.193581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.202384 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1cea-account-create-update-mdk7b"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.294978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295145 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295217 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295313 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kpn\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.295966 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.297087 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.298396 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.305247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 
crc kubenswrapper[4833]: I0127 14:31:26.305301 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.305765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.306244 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.307342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.307727 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.307737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.312337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.313075 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.313112 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae037006a250a748df6e15e9e2e300ecef710dd9481d48fc8efc4ea8fd9ab428/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.317184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kpn\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.391040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.421404 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.787758 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6584j" Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.971072 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.994170 4833 generic.go:334] "Generic (PLEG): container finished" podID="ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" containerID="09628c29d1006556dae21ba0d262c060be8a21381d838c8b832ad82d685ae3e6" exitCode=0 Jan 27 14:31:26 crc kubenswrapper[4833]: I0127 14:31:26.994275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6584j-config-kb89p" event={"ID":"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c","Type":"ContainerDied","Data":"09628c29d1006556dae21ba0d262c060be8a21381d838c8b832ad82d685ae3e6"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.028688 4833 generic.go:334] "Generic (PLEG): container finished" podID="02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" containerID="d7a5df6ff12967ff88e012505674edeb5aae81e2dd290644b249b89dd6e6827c" exitCode=0 Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.028793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fttzp" event={"ID":"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff","Type":"ContainerDied","Data":"d7a5df6ff12967ff88e012505674edeb5aae81e2dd290644b249b89dd6e6827c"} Jan 27 14:31:27 crc kubenswrapper[4833]: W0127 14:31:27.029771 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9498c3c5_aa3d_400f_9970_7aa3388688a3.slice/crio-f8f00dae01bab0a1fa6b804680d249ffae0492e3c641febe89f8c29149e8d636 WatchSource:0}: Error finding container f8f00dae01bab0a1fa6b804680d249ffae0492e3c641febe89f8c29149e8d636: Status 404 returned error can't find the 
container with id f8f00dae01bab0a1fa6b804680d249ffae0492e3c641febe89f8c29149e8d636 Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.073733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xnnkw" event={"ID":"dca7faff-15ba-4ec9-b034-8149ff5d4fd4","Type":"ContainerStarted","Data":"e66e256ccdd51a9fce43ee37229423be740387897b4d5fdfcce5c34e0fffe1d4"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.073774 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xnnkw" event={"ID":"dca7faff-15ba-4ec9-b034-8149ff5d4fd4","Type":"ContainerStarted","Data":"3c199586f5c3bc5bd8d01ec37dce501b2aaa07a3137858db111d242b73610a4f"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.080838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5915-account-create-update-wd9r6" event={"ID":"2c4cf139-86d7-47e5-aba6-ae965bc89ed8","Type":"ContainerStarted","Data":"1eb706d0dea0795b43a1bcd763982f478e393522eefbf40f969c257db070f138"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.093545 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-10b4-account-create-update-hjwjt" event={"ID":"200d49d1-840e-40d8-b347-f0603e5c5e40","Type":"ContainerStarted","Data":"36ae9dc88c2276ca5b16d737f2a22af2c41f5181acbd5ea170dc751bef76ff98"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.093733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-10b4-account-create-update-hjwjt" event={"ID":"200d49d1-840e-40d8-b347-f0603e5c5e40","Type":"ContainerStarted","Data":"29cf6b969fcee24c3cd7f2f6fe7395723e1464eb4cbccb449856068b105023c6"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.099965 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-xnnkw" podStartSLOduration=3.099949996 podStartE2EDuration="3.099949996s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:27.09550563 +0000 UTC m=+1188.746830032" watchObservedRunningTime="2026-01-27 14:31:27.099949996 +0000 UTC m=+1188.751274388" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.100340 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q6v7w" event={"ID":"3cb117e6-e4b5-4577-9af1-c3b385c4f23d","Type":"ContainerStarted","Data":"42c5fe84c8fdc70ac4e7c9c8cb92ee49ad48fc122cf12071f9e00abafb1d3dc7"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.118863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1cea-account-create-update-mdk7b" event={"ID":"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7","Type":"ContainerStarted","Data":"8bfd63e926ead44687b1817e5823aa6ea5463dc9d9351b772cb16edcd7b17bfd"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.119110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1cea-account-create-update-mdk7b" event={"ID":"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7","Type":"ContainerStarted","Data":"7477cd5ea694f7afb91a46cd00b605157731a2a071eb48c0cef58f2cab7d4a38"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.122592 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-10b4-account-create-update-hjwjt" podStartSLOduration=3.122572716 podStartE2EDuration="3.122572716s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:27.114999545 +0000 UTC m=+1188.766323967" watchObservedRunningTime="2026-01-27 14:31:27.122572716 +0000 UTC m=+1188.773897118" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.149311 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-5915-account-create-update-wd9r6" podStartSLOduration=3.149293764 podStartE2EDuration="3.149293764s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:27.142794979 +0000 UTC m=+1188.794119381" watchObservedRunningTime="2026-01-27 14:31:27.149293764 +0000 UTC m=+1188.800618156" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.159546 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-1cea-account-create-update-mdk7b" podStartSLOduration=3.159528408 podStartE2EDuration="3.159528408s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:27.157741476 +0000 UTC m=+1188.809065878" watchObservedRunningTime="2026-01-27 14:31:27.159528408 +0000 UTC m=+1188.810852810" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.164021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"75cbe202f8562a94b2c8076ac7dc81d578fc1b7c6138be78b859b5745ad868a0"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.166949 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4xbb" event={"ID":"3af7b247-ce6a-494a-97d5-1d21afcf7727","Type":"ContainerStarted","Data":"249645aaf172f80e19446a27b2161e968bb22165b8a94835fb8438b8a8ee2644"} Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.199960 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-d4xbb" podStartSLOduration=3.199935413 podStartE2EDuration="3.199935413s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:27.183715866 +0000 UTC m=+1188.835040258" watchObservedRunningTime="2026-01-27 14:31:27.199935413 +0000 UTC m=+1188.851259815" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.222925 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81b605e-53b5-4bf9-9220-b4f5a37d2f70" path="/var/lib/kubelet/pods/d81b605e-53b5-4bf9-9220-b4f5a37d2f70/volumes" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.542192 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.643517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts\") pod \"61ae6c77-5b30-4091-b78e-6ce66768de51\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.644064 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4b26\" (UniqueName: \"kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26\") pod \"61ae6c77-5b30-4091-b78e-6ce66768de51\" (UID: \"61ae6c77-5b30-4091-b78e-6ce66768de51\") " Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.644436 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61ae6c77-5b30-4091-b78e-6ce66768de51" (UID: "61ae6c77-5b30-4091-b78e-6ce66768de51"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.644630 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ae6c77-5b30-4091-b78e-6ce66768de51-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.665740 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26" (OuterVolumeSpecName: "kube-api-access-b4b26") pod "61ae6c77-5b30-4091-b78e-6ce66768de51" (UID: "61ae6c77-5b30-4091-b78e-6ce66768de51"). InnerVolumeSpecName "kube-api-access-b4b26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:27 crc kubenswrapper[4833]: I0127 14:31:27.748760 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4b26\" (UniqueName: \"kubernetes.io/projected/61ae6c77-5b30-4091-b78e-6ce66768de51-kube-api-access-b4b26\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.183131 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qmw5s" event={"ID":"61ae6c77-5b30-4091-b78e-6ce66768de51","Type":"ContainerDied","Data":"0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.183608 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2cb281234ed518fd0f6eff1adb681107bde3636af178e0b42b3d7c1375e6a8" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.183569 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qmw5s" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.193162 4833 generic.go:334] "Generic (PLEG): container finished" podID="2c4cf139-86d7-47e5-aba6-ae965bc89ed8" containerID="1eb706d0dea0795b43a1bcd763982f478e393522eefbf40f969c257db070f138" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.193264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5915-account-create-update-wd9r6" event={"ID":"2c4cf139-86d7-47e5-aba6-ae965bc89ed8","Type":"ContainerDied","Data":"1eb706d0dea0795b43a1bcd763982f478e393522eefbf40f969c257db070f138"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.196337 4833 generic.go:334] "Generic (PLEG): container finished" podID="200d49d1-840e-40d8-b347-f0603e5c5e40" containerID="36ae9dc88c2276ca5b16d737f2a22af2c41f5181acbd5ea170dc751bef76ff98" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.196478 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-10b4-account-create-update-hjwjt" event={"ID":"200d49d1-840e-40d8-b347-f0603e5c5e40","Type":"ContainerDied","Data":"36ae9dc88c2276ca5b16d737f2a22af2c41f5181acbd5ea170dc751bef76ff98"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.202100 4833 generic.go:334] "Generic (PLEG): container finished" podID="3af7b247-ce6a-494a-97d5-1d21afcf7727" containerID="249645aaf172f80e19446a27b2161e968bb22165b8a94835fb8438b8a8ee2644" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.202158 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4xbb" event={"ID":"3af7b247-ce6a-494a-97d5-1d21afcf7727","Type":"ContainerDied","Data":"249645aaf172f80e19446a27b2161e968bb22165b8a94835fb8438b8a8ee2644"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.204757 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" 
containerID="8bfd63e926ead44687b1817e5823aa6ea5463dc9d9351b772cb16edcd7b17bfd" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.204806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1cea-account-create-update-mdk7b" event={"ID":"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7","Type":"ContainerDied","Data":"8bfd63e926ead44687b1817e5823aa6ea5463dc9d9351b772cb16edcd7b17bfd"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.219314 4833 generic.go:334] "Generic (PLEG): container finished" podID="dca7faff-15ba-4ec9-b034-8149ff5d4fd4" containerID="e66e256ccdd51a9fce43ee37229423be740387897b4d5fdfcce5c34e0fffe1d4" exitCode=0 Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.219526 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xnnkw" event={"ID":"dca7faff-15ba-4ec9-b034-8149ff5d4fd4","Type":"ContainerDied","Data":"e66e256ccdd51a9fce43ee37229423be740387897b4d5fdfcce5c34e0fffe1d4"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.222122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerStarted","Data":"f8f00dae01bab0a1fa6b804680d249ffae0492e3c641febe89f8c29149e8d636"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.269059 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"f5edf54c650edea3e3c8e33eedeee9f0272244bbad03d33f712963a463b9a205"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.269113 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"eb3bfccec533a6e8d1f4d1309a4568343cbd595fc803eb2f10973bfe7f5d8755"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.269128 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"df43a2ef-c36c-4b08-bee6-6820e443220c","Type":"ContainerStarted","Data":"6601aac4e084bad29351d68c13cef6e1b282b22767f5f9cda9b10aca953091b1"} Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.785868 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.790835 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.900997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901190 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901215 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc 
kubenswrapper[4833]: I0127 14:31:28.901257 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxqh2\" (UniqueName: \"kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2\") pod \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run" (OuterVolumeSpecName: "var-run") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2plcw\" (UniqueName: \"kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901475 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts\") pod \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\" (UID: \"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901589 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn\") pod \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\" (UID: \"ac38b3a3-8683-4ca3-b649-6c4b8e11a12c\") " Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.901652 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.902110 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.902420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" (UID: "02d74d9d-a60f-4a0b-a3a8-55e91415f8ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.902847 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.902984 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.903004 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.903018 4833 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.903031 4833 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:28 crc kubenswrapper[4833]: I0127 14:31:28.904151 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts" (OuterVolumeSpecName: "scripts") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.005093 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.010812 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw" (OuterVolumeSpecName: "kube-api-access-2plcw") pod "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" (UID: "ac38b3a3-8683-4ca3-b649-6c4b8e11a12c"). InnerVolumeSpecName "kube-api-access-2plcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.010850 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2" (OuterVolumeSpecName: "kube-api-access-xxqh2") pod "02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" (UID: "02d74d9d-a60f-4a0b-a3a8-55e91415f8ff"). InnerVolumeSpecName "kube-api-access-xxqh2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.106859 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxqh2\" (UniqueName: \"kubernetes.io/projected/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff-kube-api-access-xxqh2\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.106891 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2plcw\" (UniqueName: \"kubernetes.io/projected/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c-kube-api-access-2plcw\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.126408 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6584j-config-kb89p"] Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.137626 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6584j-config-kb89p"] Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.227640 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" path="/var/lib/kubelet/pods/ac38b3a3-8683-4ca3-b649-6c4b8e11a12c/volumes" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.283302 4833 scope.go:117] "RemoveContainer" containerID="09628c29d1006556dae21ba0d262c060be8a21381d838c8b832ad82d685ae3e6" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.283416 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6584j-config-kb89p" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.288915 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-fttzp" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.288916 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fttzp" event={"ID":"02d74d9d-a60f-4a0b-a3a8-55e91415f8ff","Type":"ContainerDied","Data":"ff2f3f3ca32696bb130fcb63792787d95f188f9dd8c590e428bed4909d57d02a"} Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.288962 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2f3f3ca32696bb130fcb63792787d95f188f9dd8c590e428bed4909d57d02a" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.351666 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.469316109 podStartE2EDuration="41.351638641s" podCreationTimestamp="2026-01-27 14:30:48 +0000 UTC" firstStartedPulling="2026-01-27 14:31:06.662022591 +0000 UTC m=+1168.313346993" lastFinishedPulling="2026-01-27 14:31:23.544345133 +0000 UTC m=+1185.195669525" observedRunningTime="2026-01-27 14:31:29.332835791 +0000 UTC m=+1190.984160213" watchObservedRunningTime="2026-01-27 14:31:29.351638641 +0000 UTC m=+1191.002963063" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649170 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:29 crc kubenswrapper[4833]: E0127 14:31:29.649576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" containerName="ovn-config" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649588 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" containerName="ovn-config" Jan 27 14:31:29 crc kubenswrapper[4833]: E0127 14:31:29.649608 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" containerName="mariadb-database-create" Jan 27 14:31:29 crc 
kubenswrapper[4833]: I0127 14:31:29.649614 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" containerName="mariadb-database-create" Jan 27 14:31:29 crc kubenswrapper[4833]: E0127 14:31:29.649633 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ae6c77-5b30-4091-b78e-6ce66768de51" containerName="mariadb-account-create-update" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649639 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ae6c77-5b30-4091-b78e-6ce66768de51" containerName="mariadb-account-create-update" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649791 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" containerName="mariadb-database-create" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649804 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac38b3a3-8683-4ca3-b649-6c4b8e11a12c" containerName="ovn-config" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.649817 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ae6c77-5b30-4091-b78e-6ce66768de51" containerName="mariadb-account-create-update" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.650701 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.654817 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.667972 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.675199 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts\") pod \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srs8n\" (UniqueName: \"kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n\") pod \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\" (UID: \"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7\") " Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" (UID: "4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718712 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718788 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718862 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6d5l\" (UniqueName: \"kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" 
(UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.718954 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.719043 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.725736 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n" (OuterVolumeSpecName: "kube-api-access-srs8n") pod "4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" (UID: "4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7"). InnerVolumeSpecName "kube-api-access-srs8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820057 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820086 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6d5l\" (UniqueName: \"kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 
27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.820283 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srs8n\" (UniqueName: \"kubernetes.io/projected/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7-kube-api-access-srs8n\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.821099 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.822099 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.822615 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.823546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.824215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.842285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6d5l\" (UniqueName: \"kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l\") pod \"dnsmasq-dns-6d5b6d6b67-lz2sr\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.943867 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.965926 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.970766 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.971952 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:29 crc kubenswrapper[4833]: I0127 14:31:29.988566 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlgq2\" (UniqueName: \"kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2\") pod \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022055 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts\") pod \"3af7b247-ce6a-494a-97d5-1d21afcf7727\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022149 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts\") pod \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xjhh\" (UniqueName: \"kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh\") pod \"3af7b247-ce6a-494a-97d5-1d21afcf7727\" (UID: \"3af7b247-ce6a-494a-97d5-1d21afcf7727\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022247 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts\") pod \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\" (UID: \"dca7faff-15ba-4ec9-b034-8149ff5d4fd4\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022298 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-snrj8\" (UniqueName: \"kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8\") pod \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\" (UID: \"2c4cf139-86d7-47e5-aba6-ae965bc89ed8\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbvcd\" (UniqueName: \"kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd\") pod \"200d49d1-840e-40d8-b347-f0603e5c5e40\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.022339 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts\") pod \"200d49d1-840e-40d8-b347-f0603e5c5e40\" (UID: \"200d49d1-840e-40d8-b347-f0603e5c5e40\") " Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.023302 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3af7b247-ce6a-494a-97d5-1d21afcf7727" (UID: "3af7b247-ce6a-494a-97d5-1d21afcf7727"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.023520 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dca7faff-15ba-4ec9-b034-8149ff5d4fd4" (UID: "dca7faff-15ba-4ec9-b034-8149ff5d4fd4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.023606 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c4cf139-86d7-47e5-aba6-ae965bc89ed8" (UID: "2c4cf139-86d7-47e5-aba6-ae965bc89ed8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.023911 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "200d49d1-840e-40d8-b347-f0603e5c5e40" (UID: "200d49d1-840e-40d8-b347-f0603e5c5e40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.026798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd" (OuterVolumeSpecName: "kube-api-access-zbvcd") pod "200d49d1-840e-40d8-b347-f0603e5c5e40" (UID: "200d49d1-840e-40d8-b347-f0603e5c5e40"). InnerVolumeSpecName "kube-api-access-zbvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.027167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8" (OuterVolumeSpecName: "kube-api-access-snrj8") pod "2c4cf139-86d7-47e5-aba6-ae965bc89ed8" (UID: "2c4cf139-86d7-47e5-aba6-ae965bc89ed8"). InnerVolumeSpecName "kube-api-access-snrj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.029809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2" (OuterVolumeSpecName: "kube-api-access-nlgq2") pod "dca7faff-15ba-4ec9-b034-8149ff5d4fd4" (UID: "dca7faff-15ba-4ec9-b034-8149ff5d4fd4"). InnerVolumeSpecName "kube-api-access-nlgq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.029889 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh" (OuterVolumeSpecName: "kube-api-access-6xjhh") pod "3af7b247-ce6a-494a-97d5-1d21afcf7727" (UID: "3af7b247-ce6a-494a-97d5-1d21afcf7727"). InnerVolumeSpecName "kube-api-access-6xjhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.125799 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xjhh\" (UniqueName: \"kubernetes.io/projected/3af7b247-ce6a-494a-97d5-1d21afcf7727-kube-api-access-6xjhh\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126126 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126138 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrj8\" (UniqueName: \"kubernetes.io/projected/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-kube-api-access-snrj8\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126148 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbvcd\" (UniqueName: 
\"kubernetes.io/projected/200d49d1-840e-40d8-b347-f0603e5c5e40-kube-api-access-zbvcd\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126157 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/200d49d1-840e-40d8-b347-f0603e5c5e40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126165 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlgq2\" (UniqueName: \"kubernetes.io/projected/dca7faff-15ba-4ec9-b034-8149ff5d4fd4-kube-api-access-nlgq2\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126174 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3af7b247-ce6a-494a-97d5-1d21afcf7727-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.126182 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c4cf139-86d7-47e5-aba6-ae965bc89ed8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.305162 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerStarted","Data":"7c8a8be740230c85d90549b48f71c460aaf0d65223613733222524b2d00d9f7d"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.307919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5915-account-create-update-wd9r6" event={"ID":"2c4cf139-86d7-47e5-aba6-ae965bc89ed8","Type":"ContainerDied","Data":"8394515f43949fe623d0311c715c4b3eb884a6163d1df6c8079452827a6774c1"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.307954 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8394515f43949fe623d0311c715c4b3eb884a6163d1df6c8079452827a6774c1" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.307966 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5915-account-create-update-wd9r6" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.309479 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-10b4-account-create-update-hjwjt" event={"ID":"200d49d1-840e-40d8-b347-f0603e5c5e40","Type":"ContainerDied","Data":"29cf6b969fcee24c3cd7f2f6fe7395723e1464eb4cbccb449856068b105023c6"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.309518 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29cf6b969fcee24c3cd7f2f6fe7395723e1464eb4cbccb449856068b105023c6" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.309583 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-10b4-account-create-update-hjwjt" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.312356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d4xbb" event={"ID":"3af7b247-ce6a-494a-97d5-1d21afcf7727","Type":"ContainerDied","Data":"ad8c2cd90f88bf2389039d22f689276ec8b7aa152906114f5c5b0254e0afd858"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.312393 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8c2cd90f88bf2389039d22f689276ec8b7aa152906114f5c5b0254e0afd858" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.312484 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d4xbb" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.318360 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1cea-account-create-update-mdk7b" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.319107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1cea-account-create-update-mdk7b" event={"ID":"4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7","Type":"ContainerDied","Data":"7477cd5ea694f7afb91a46cd00b605157731a2a071eb48c0cef58f2cab7d4a38"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.319186 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7477cd5ea694f7afb91a46cd00b605157731a2a071eb48c0cef58f2cab7d4a38" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.339682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xnnkw" event={"ID":"dca7faff-15ba-4ec9-b034-8149ff5d4fd4","Type":"ContainerDied","Data":"3c199586f5c3bc5bd8d01ec37dce501b2aaa07a3137858db111d242b73610a4f"} Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.339720 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c199586f5c3bc5bd8d01ec37dce501b2aaa07a3137858db111d242b73610a4f" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.340885 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xnnkw" Jan 27 14:31:30 crc kubenswrapper[4833]: I0127 14:31:30.460281 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:32 crc kubenswrapper[4833]: I0127 14:31:32.260714 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:31:32 crc kubenswrapper[4833]: I0127 14:31:32.261055 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:31:32 crc kubenswrapper[4833]: I0127 14:31:32.261107 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:31:32 crc kubenswrapper[4833]: I0127 14:31:32.261876 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:31:32 crc kubenswrapper[4833]: I0127 14:31:32.261931 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b" 
gracePeriod=600 Jan 27 14:31:33 crc kubenswrapper[4833]: I0127 14:31:33.367278 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b" exitCode=0 Jan 27 14:31:33 crc kubenswrapper[4833]: I0127 14:31:33.367348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b"} Jan 27 14:31:33 crc kubenswrapper[4833]: I0127 14:31:33.367665 4833 scope.go:117] "RemoveContainer" containerID="40187fec1df7ede27c8027d4dc094283cfd4a32e465d547b8f9dfc102b7b849f" Jan 27 14:31:33 crc kubenswrapper[4833]: I0127 14:31:33.370160 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" event={"ID":"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8","Type":"ContainerStarted","Data":"974da70856f711cad2ca7e1471f6c07ccee0168fd0523128fa139220cfd079f5"} Jan 27 14:31:42 crc kubenswrapper[4833]: I0127 14:31:42.463051 4833 generic.go:334] "Generic (PLEG): container finished" podID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerID="7c8a8be740230c85d90549b48f71c460aaf0d65223613733222524b2d00d9f7d" exitCode=0 Jan 27 14:31:42 crc kubenswrapper[4833]: I0127 14:31:42.463136 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerDied","Data":"7c8a8be740230c85d90549b48f71c460aaf0d65223613733222524b2d00d9f7d"} Jan 27 14:31:44 crc kubenswrapper[4833]: I0127 14:31:44.484108 4833 generic.go:334] "Generic (PLEG): container finished" podID="69d497f9-964a-4818-9f39-09cf9a0f83fb" containerID="d43ced6ff834553886ed37456bb08f98e09f184342a44034177188e0c002abae" exitCode=0 Jan 27 14:31:44 crc kubenswrapper[4833]: I0127 14:31:44.484910 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-26wng" event={"ID":"69d497f9-964a-4818-9f39-09cf9a0f83fb","Type":"ContainerDied","Data":"d43ced6ff834553886ed37456bb08f98e09f184342a44034177188e0c002abae"} Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.554430 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-26wng" event={"ID":"69d497f9-964a-4818-9f39-09cf9a0f83fb","Type":"ContainerDied","Data":"a9fb0a247fd2468652cccac7d3ea7bebadae8119477daf0237cdaf6fc55a29b2"} Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.555143 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fb0a247fd2468652cccac7d3ea7bebadae8119477daf0237cdaf6fc55a29b2" Jan 27 14:31:50 crc kubenswrapper[4833]: E0127 14:31:50.593031 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 27 14:31:50 crc kubenswrapper[4833]: E0127 14:31:50.593086 4833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 27 14:31:50 crc kubenswrapper[4833]: E0127 14:31:50.593208 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.22:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twnhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-wchjj_openstack(3df071dc-eb5b-40dd-85ea-430f44ab198f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:31:50 crc kubenswrapper[4833]: E0127 14:31:50.594519 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-wchjj" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.624114 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-26wng" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.727102 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc2jp\" (UniqueName: \"kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp\") pod \"69d497f9-964a-4818-9f39-09cf9a0f83fb\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.727150 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data\") pod \"69d497f9-964a-4818-9f39-09cf9a0f83fb\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.727214 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle\") pod \"69d497f9-964a-4818-9f39-09cf9a0f83fb\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.727291 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data\") pod \"69d497f9-964a-4818-9f39-09cf9a0f83fb\" (UID: \"69d497f9-964a-4818-9f39-09cf9a0f83fb\") " Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.731714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "69d497f9-964a-4818-9f39-09cf9a0f83fb" (UID: "69d497f9-964a-4818-9f39-09cf9a0f83fb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.732720 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp" (OuterVolumeSpecName: "kube-api-access-rc2jp") pod "69d497f9-964a-4818-9f39-09cf9a0f83fb" (UID: "69d497f9-964a-4818-9f39-09cf9a0f83fb"). InnerVolumeSpecName "kube-api-access-rc2jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.755169 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69d497f9-964a-4818-9f39-09cf9a0f83fb" (UID: "69d497f9-964a-4818-9f39-09cf9a0f83fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.774799 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data" (OuterVolumeSpecName: "config-data") pod "69d497f9-964a-4818-9f39-09cf9a0f83fb" (UID: "69d497f9-964a-4818-9f39-09cf9a0f83fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.829618 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.829655 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.829667 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc2jp\" (UniqueName: \"kubernetes.io/projected/69d497f9-964a-4818-9f39-09cf9a0f83fb-kube-api-access-rc2jp\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:50 crc kubenswrapper[4833]: I0127 14:31:50.829680 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d497f9-964a-4818-9f39-09cf9a0f83fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.570918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerStarted","Data":"26c8f3a910042695e917df206850fa5c7ac9165185789434f9ec79f7b3aaae7b"} Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.574089 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10"} Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.576063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q6v7w" 
event={"ID":"3cb117e6-e4b5-4577-9af1-c3b385c4f23d","Type":"ContainerStarted","Data":"1dcc72e263004f90472e17961cc5e35d3a9772dcdd013709387dcca812a62fd5"} Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.577616 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerID="a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83" exitCode=0 Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.577695 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-26wng" Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.577763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" event={"ID":"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8","Type":"ContainerDied","Data":"a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83"} Jan 27 14:31:51 crc kubenswrapper[4833]: E0127 14:31:51.579316 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-wchjj" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" Jan 27 14:31:51 crc kubenswrapper[4833]: I0127 14:31:51.614525 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-q6v7w" podStartSLOduration=3.295923665 podStartE2EDuration="27.614508065s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="2026-01-27 14:31:26.234641089 +0000 UTC m=+1187.885965491" lastFinishedPulling="2026-01-27 14:31:50.553225489 +0000 UTC m=+1212.204549891" observedRunningTime="2026-01-27 14:31:51.611173686 +0000 UTC m=+1213.262498078" watchObservedRunningTime="2026-01-27 14:31:51.614508065 +0000 UTC m=+1213.265832467" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.087108 4833 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.123606 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.123933 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b247-ce6a-494a-97d5-1d21afcf7727" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.123949 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b247-ce6a-494a-97d5-1d21afcf7727" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.123965 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca7faff-15ba-4ec9-b034-8149ff5d4fd4" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.123972 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca7faff-15ba-4ec9-b034-8149ff5d4fd4" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.123983 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.123989 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.124002 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4cf139-86d7-47e5-aba6-ae965bc89ed8" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124008 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4cf139-86d7-47e5-aba6-ae965bc89ed8" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.124020 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d497f9-964a-4818-9f39-09cf9a0f83fb" containerName="glance-db-sync" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124027 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d497f9-964a-4818-9f39-09cf9a0f83fb" containerName="glance-db-sync" Jan 27 14:31:52 crc kubenswrapper[4833]: E0127 14:31:52.124036 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200d49d1-840e-40d8-b347-f0603e5c5e40" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124042 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="200d49d1-840e-40d8-b347-f0603e5c5e40" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124194 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4cf139-86d7-47e5-aba6-ae965bc89ed8" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124214 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d497f9-964a-4818-9f39-09cf9a0f83fb" containerName="glance-db-sync" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124231 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca7faff-15ba-4ec9-b034-8149ff5d4fd4" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124243 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="200d49d1-840e-40d8-b347-f0603e5c5e40" containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124259 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b247-ce6a-494a-97d5-1d21afcf7727" containerName="mariadb-database-create" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.124274 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" 
containerName="mariadb-account-create-update" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.125431 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.140140 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.153813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.153930 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.153976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.154021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " 
pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.154101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.154163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6tw\" (UniqueName: \"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.255844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.255922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.255985 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6tw\" (UniqueName: \"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " 
pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.256016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.256128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257051 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.257730 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.293371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6tw\" (UniqueName: \"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw\") pod \"dnsmasq-dns-895cf5cf-9z6h2\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.449485 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.585698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" event={"ID":"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8","Type":"ContainerStarted","Data":"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba"} Jan 27 14:31:52 crc kubenswrapper[4833]: I0127 14:31:52.608523 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" podStartSLOduration=23.608506735 podStartE2EDuration="23.608506735s" podCreationTimestamp="2026-01-27 14:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:52.604782986 +0000 UTC m=+1214.256107398" watchObservedRunningTime="2026-01-27 14:31:52.608506735 +0000 UTC m=+1214.259831137" Jan 27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:52.911955 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:53.594930 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerID="336731e008b79151351f37ce0443fe36b223377ea8e582be1adbac09ba5a836a" exitCode=0 Jan 27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:53.595028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" event={"ID":"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97","Type":"ContainerDied","Data":"336731e008b79151351f37ce0443fe36b223377ea8e582be1adbac09ba5a836a"} Jan 27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:53.595238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" event={"ID":"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97","Type":"ContainerStarted","Data":"b3bfe0c1b62e3503f344ac7d317aa0b38dc4cb04b1a1ebf9723ad9734133a483"} Jan 
27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:53.595585 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:53 crc kubenswrapper[4833]: I0127 14:31:53.595588 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="dnsmasq-dns" containerID="cri-o://3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba" gracePeriod=10 Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.108861 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197205 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6d5l\" (UniqueName: \"kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197234 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197253 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197315 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.197342 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb\") pod \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\" (UID: \"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8\") " Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.203252 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l" (OuterVolumeSpecName: "kube-api-access-l6d5l") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "kube-api-access-l6d5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.244381 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config" (OuterVolumeSpecName: "config") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.257023 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.263694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.268064 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.268891 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" (UID: "6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300050 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300090 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6d5l\" (UniqueName: \"kubernetes.io/projected/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-kube-api-access-l6d5l\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300105 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300120 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300133 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.300143 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.605511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" event={"ID":"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97","Type":"ContainerStarted","Data":"ce331e0ab8500d445fad17894f94427561c3e49624f31f6e5b0e7ebfe9800e2e"} Jan 27 14:31:54 crc 
kubenswrapper[4833]: I0127 14:31:54.605641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.607257 4833 generic.go:334] "Generic (PLEG): container finished" podID="3cb117e6-e4b5-4577-9af1-c3b385c4f23d" containerID="1dcc72e263004f90472e17961cc5e35d3a9772dcdd013709387dcca812a62fd5" exitCode=0 Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.607321 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q6v7w" event={"ID":"3cb117e6-e4b5-4577-9af1-c3b385c4f23d","Type":"ContainerDied","Data":"1dcc72e263004f90472e17961cc5e35d3a9772dcdd013709387dcca812a62fd5"} Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.609662 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerID="3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba" exitCode=0 Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.609712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" event={"ID":"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8","Type":"ContainerDied","Data":"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba"} Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.609716 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.609761 4833 scope.go:117] "RemoveContainer" containerID="3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.609748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-lz2sr" event={"ID":"6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8","Type":"ContainerDied","Data":"974da70856f711cad2ca7e1471f6c07ccee0168fd0523128fa139220cfd079f5"} Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.634916 4833 scope.go:117] "RemoveContainer" containerID="a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.639561 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" podStartSLOduration=2.639537422 podStartE2EDuration="2.639537422s" podCreationTimestamp="2026-01-27 14:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:54.634511682 +0000 UTC m=+1216.285836094" watchObservedRunningTime="2026-01-27 14:31:54.639537422 +0000 UTC m=+1216.290861844" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.657806 4833 scope.go:117] "RemoveContainer" containerID="3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba" Jan 27 14:31:54 crc kubenswrapper[4833]: E0127 14:31:54.658267 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba\": container with ID starting with 3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba not found: ID does not exist" containerID="3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 
14:31:54.658298 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba"} err="failed to get container status \"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba\": rpc error: code = NotFound desc = could not find container \"3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba\": container with ID starting with 3d20389e7ccb2f445684d6fe0ed7c73e96a6d688e2704259c9ca6391337557ba not found: ID does not exist" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.658324 4833 scope.go:117] "RemoveContainer" containerID="a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83" Jan 27 14:31:54 crc kubenswrapper[4833]: E0127 14:31:54.658909 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83\": container with ID starting with a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83 not found: ID does not exist" containerID="a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.658948 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83"} err="failed to get container status \"a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83\": rpc error: code = NotFound desc = could not find container \"a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83\": container with ID starting with a12d369f85a6298561c20f8bd033300a1fd11656cacaf8051517e553e48a1a83 not found: ID does not exist" Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 14:31:54.720363 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:54 crc kubenswrapper[4833]: I0127 
14:31:54.727947 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-lz2sr"] Jan 27 14:31:55 crc kubenswrapper[4833]: I0127 14:31:55.220164 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" path="/var/lib/kubelet/pods/6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8/volumes" Jan 27 14:31:55 crc kubenswrapper[4833]: I0127 14:31:55.960203 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.038572 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle\") pod \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.038669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data\") pod \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.038810 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvt58\" (UniqueName: \"kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58\") pod \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\" (UID: \"3cb117e6-e4b5-4577-9af1-c3b385c4f23d\") " Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.043981 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58" (OuterVolumeSpecName: "kube-api-access-lvt58") pod "3cb117e6-e4b5-4577-9af1-c3b385c4f23d" (UID: "3cb117e6-e4b5-4577-9af1-c3b385c4f23d"). 
InnerVolumeSpecName "kube-api-access-lvt58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.084079 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cb117e6-e4b5-4577-9af1-c3b385c4f23d" (UID: "3cb117e6-e4b5-4577-9af1-c3b385c4f23d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.106222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data" (OuterVolumeSpecName: "config-data") pod "3cb117e6-e4b5-4577-9af1-c3b385c4f23d" (UID: "3cb117e6-e4b5-4577-9af1-c3b385c4f23d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.141150 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvt58\" (UniqueName: \"kubernetes.io/projected/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-kube-api-access-lvt58\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.141326 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.141427 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cb117e6-e4b5-4577-9af1-c3b385c4f23d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.637774 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerStarted","Data":"fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa"} Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.638095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerStarted","Data":"52b7c750f2cee3c45c2e0ed7185f0e745182a58a8a9d2392c410103b133df09c"} Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.639790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-q6v7w" event={"ID":"3cb117e6-e4b5-4577-9af1-c3b385c4f23d","Type":"ContainerDied","Data":"42c5fe84c8fdc70ac4e7c9c8cb92ee49ad48fc122cf12071f9e00abafb1d3dc7"} Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.639815 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42c5fe84c8fdc70ac4e7c9c8cb92ee49ad48fc122cf12071f9e00abafb1d3dc7" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.639878 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-q6v7w" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.668372 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=30.668357866 podStartE2EDuration="30.668357866s" podCreationTimestamp="2026-01-27 14:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:56.666665566 +0000 UTC m=+1218.317989978" watchObservedRunningTime="2026-01-27 14:31:56.668357866 +0000 UTC m=+1218.319682268" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.905546 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.905782 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="dnsmasq-dns" containerID="cri-o://ce331e0ab8500d445fad17894f94427561c3e49624f31f6e5b0e7ebfe9800e2e" gracePeriod=10 Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.917507 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2j7w8"] Jan 27 14:31:56 crc kubenswrapper[4833]: E0127 14:31:56.917853 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="init" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.917870 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="init" Jan 27 14:31:56 crc kubenswrapper[4833]: E0127 14:31:56.917892 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="dnsmasq-dns" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.917899 4833 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="dnsmasq-dns" Jan 27 14:31:56 crc kubenswrapper[4833]: E0127 14:31:56.917906 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb117e6-e4b5-4577-9af1-c3b385c4f23d" containerName="keystone-db-sync" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.917912 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb117e6-e4b5-4577-9af1-c3b385c4f23d" containerName="keystone-db-sync" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.918085 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8d15c8-3b09-4f7c-b6a3-e59a21e941d8" containerName="dnsmasq-dns" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.918105 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb117e6-e4b5-4577-9af1-c3b385c4f23d" containerName="keystone-db-sync" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.918715 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.924061 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.924240 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.924344 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fg8kt" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.924465 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.925020 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.925106 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2j7w8"] Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.943793 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.945203 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962331 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6fl\" (UniqueName: \"kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl\") pod \"keystone-bootstrap-2j7w8\" (UID: 
\"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.962399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:56 crc kubenswrapper[4833]: I0127 14:31:56.992624 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065564 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065720 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp6fl\" (UniqueName: \"kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqdh\" (UniqueName: \"kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.065988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.084873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.091160 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.097674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp6fl\" (UniqueName: \"kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.102145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.107466 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.126575 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ht8p9"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.127980 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.136100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data\") pod \"keystone-bootstrap-2j7w8\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.136794 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.137032 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9v4gg" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.137996 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.164147 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.165528 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178531 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-m5sv8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178705 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssqdh\" (UniqueName: \"kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178863 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.179762 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.180205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178723 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.180469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.178749 4833 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.180739 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.181077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.188960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.190827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ht8p9"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.237311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssqdh\" (UniqueName: \"kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh\") pod \"dnsmasq-dns-6c9c9f998c-2jltb\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.248773 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.286209 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhpw\" (UniqueName: \"kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287141 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v69\" (UniqueName: 
\"kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.287311 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.328174 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mtnxp"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.329494 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.351084 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.351265 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bthsg" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.351365 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.361070 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mtnxp"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.370157 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392759 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392837 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9v69\" (UniqueName: \"kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392869 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392890 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392911 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhpw\" (UniqueName: \"kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.392989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.393951 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.394565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.395356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.395843 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.401014 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " 
pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.424975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.427977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.430926 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.431304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.448914 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9v69\" (UniqueName: \"kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69\") pod \"horizon-7544d4446c-rj8rn\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.455135 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pdhpw\" (UniqueName: \"kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw\") pod \"cinder-db-sync-ht8p9\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.472307 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6dqbw"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.473658 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.496629 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6nldk" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.496787 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.496987 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.502645 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kn4z\" (UniqueName: \"kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.502693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.502737 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.502977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.503519 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cjff8"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.504556 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.535933 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-79qzt" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.536119 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.591194 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.605740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kn4z\" (UniqueName: \"kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.605828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.605861 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbgg\" (UniqueName: \"kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.605906 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.605938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle\") pod \"neutron-db-sync-mtnxp\" (UID: 
\"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606010 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606056 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606121 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnbb4\" (UniqueName: \"kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606192 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " 
pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.606262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.621078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.652095 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6dqbw"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.658292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.742215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.742296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnbb4\" (UniqueName: \"kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4\") pod \"placement-db-sync-6dqbw\" (UID: 
\"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.742321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.742343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.742387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.745174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.762143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.762180 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbgg\" (UniqueName: \"kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.762209 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.773470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnbb4\" (UniqueName: \"kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.773593 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cjff8"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.775153 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kn4z\" (UniqueName: \"kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z\") pod \"neutron-db-sync-mtnxp\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.775992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " 
pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.779494 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.794619 4833 generic.go:334] "Generic (PLEG): container finished" podID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerID="ce331e0ab8500d445fad17894f94427561c3e49624f31f6e5b0e7ebfe9800e2e" exitCode=0 Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.794636 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.794725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" event={"ID":"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97","Type":"ContainerDied","Data":"ce331e0ab8500d445fad17894f94427561c3e49624f31f6e5b0e7ebfe9800e2e"} Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.804764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts\") pod \"placement-db-sync-6dqbw\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.811421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle\") pod \"placement-db-sync-6dqbw\" (UID: 
\"a324f832-7082-443a-87c7-3cef46ebe7ea\") " pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.831118 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.832718 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.833879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbgg\" (UniqueName: \"kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg\") pod \"barbican-db-sync-cjff8\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.835395 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6dqbw" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.842992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.843245 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.843355 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.843516 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7fbqk" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.932089 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cjff8" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.938952 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.948916 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.962546 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.964359 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966249 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966316 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966547 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.966592 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj2l4\" (UniqueName: \"kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:57 crc 
kubenswrapper[4833]: I0127 14:31:57.971003 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:31:57 crc kubenswrapper[4833]: I0127 14:31:57.974562 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.005209 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.009615 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.044765 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069520 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj2l4\" (UniqueName: \"kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069608 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069626 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069641 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqt9w\" (UniqueName: \"kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc 
kubenswrapper[4833]: I0127 14:31:58.069715 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdjq\" (UniqueName: \"kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069833 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069923 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " 
pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.069959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.070550 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.071556 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.072673 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.073398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.076165 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.076331 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.077227 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.077391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.078035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.084818 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.088913 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.099032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj2l4\" (UniqueName: \"kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.101664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.130751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.172817 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc 
kubenswrapper[4833]: I0127 14:31:58.172914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.172952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.172988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173024 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqt9w\" (UniqueName: \"kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173047 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173070 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxdjq\" (UniqueName: \"kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173177 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxw5\" (UniqueName: 
\"kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.173305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.176081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.177034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.177486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.177811 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " 
pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.178267 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.178765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.179465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.184021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.193766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.197091 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqt9w\" (UniqueName: \"kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w\") pod \"horizon-6f4f484fb9-t4v96\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.197293 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdjq\" (UniqueName: \"kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq\") pod \"dnsmasq-dns-57c957c4ff-vsdp5\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.250641 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.263380 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.273020 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274248 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274298 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxw5\" (UniqueName: \"kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.274421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.275500 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.277529 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.282313 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.282531 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.282983 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.283247 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.285876 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.285904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.289328 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.315799 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxw5\" (UniqueName: \"kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5\") pod \"ceilometer-0\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377394 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377474 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: 
I0127 14:31:58.377609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvd5\" (UniqueName: \"kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377812 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.377938 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 
14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.378027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.398138 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.427410 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2j7w8"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.452346 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484636 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484665 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvd5\" (UniqueName: \"kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484818 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484883 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 
crc kubenswrapper[4833]: I0127 14:31:58.488434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.489845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.484584 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.495312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.501028 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.502254 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.503510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.508258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.529921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvd5\" (UniqueName: \"kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.532656 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:58 crc kubenswrapper[4833]: W0127 14:31:58.533062 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7929103_eabc_4bee_9150_7b016dd29dc1.slice/crio-79c1983bc0165aa35b96e958fa49b2aa088fb9d597a24f4becb84338a98a0571 WatchSource:0}: Error finding container 79c1983bc0165aa35b96e958fa49b2aa088fb9d597a24f4becb84338a98a0571: Status 404 returned error can't find the container with id 79c1983bc0165aa35b96e958fa49b2aa088fb9d597a24f4becb84338a98a0571 Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.548186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698056 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698118 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698188 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c6tw\" (UniqueName: \"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: 
\"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698214 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698269 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.698374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0\") pod \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\" (UID: \"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97\") " Jan 27 14:31:58 crc kubenswrapper[4833]: W0127 14:31:58.709083 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda93af5fb_6812_4f30_9e89_e8c58b01a69e.slice/crio-7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90 WatchSource:0}: Error finding container 7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90: Status 404 returned error can't find the container with id 7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90 Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.717706 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ht8p9"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.718725 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw" (OuterVolumeSpecName: "kube-api-access-8c6tw") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "kube-api-access-8c6tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.768951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.804012 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c6tw\" (UniqueName: \"kubernetes.io/projected/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-kube-api-access-8c6tw\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.806212 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.815671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" event={"ID":"c7929103-eabc-4bee-9150-7b016dd29dc1","Type":"ContainerStarted","Data":"79c1983bc0165aa35b96e958fa49b2aa088fb9d597a24f4becb84338a98a0571"} Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.833737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ht8p9" event={"ID":"a93af5fb-6812-4f30-9e89-e8c58b01a69e","Type":"ContainerStarted","Data":"7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90"} Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.840981 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2j7w8" event={"ID":"cffa0abf-c51c-480c-8650-0a90cad148ef","Type":"ContainerStarted","Data":"ca1a5c46a74fe8e7d3074bea12534efe00a027b9adb85138eb2ef01dc5420713"} Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.856364 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" event={"ID":"cbbb5d3d-ad63-4190-a51a-d1360bcc5c97","Type":"ContainerDied","Data":"b3bfe0c1b62e3503f344ac7d317aa0b38dc4cb04b1a1ebf9723ad9734133a483"} Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.856430 4833 scope.go:117] "RemoveContainer" containerID="ce331e0ab8500d445fad17894f94427561c3e49624f31f6e5b0e7ebfe9800e2e" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.856638 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-9z6h2" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.878489 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mtnxp"] Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.942056 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config" (OuterVolumeSpecName: "config") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.988006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:58 crc kubenswrapper[4833]: I0127 14:31:58.999400 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.014114 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.014145 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.014154 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.030047 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.033096 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" (UID: "cbbb5d3d-ad63-4190-a51a-d1360bcc5c97"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.121737 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.121779 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.129434 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.145242 4833 scope.go:117] "RemoveContainer" containerID="336731e008b79151351f37ce0443fe36b223377ea8e582be1adbac09ba5a836a" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.207793 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.240595 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"] Jan 27 14:31:59 crc kubenswrapper[4833]: E0127 14:31:59.241231 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="dnsmasq-dns" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.241248 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="dnsmasq-dns" Jan 27 14:31:59 crc kubenswrapper[4833]: E0127 14:31:59.241263 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="init" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.241273 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="init" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.241557 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" containerName="dnsmasq-dns" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.244872 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.278803 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.299425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.334885 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.335012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.335113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblvv\" (UniqueName: \"kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 
14:31:59.335143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.335203 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.341927 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cjff8"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.398390 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6dqbw"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.406813 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.417847 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-9z6h2"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.439325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.439416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs\") 
pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.439526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblvv\" (UniqueName: \"kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.439547 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.439581 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.442141 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.446285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 
14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.451075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.451907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.475133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sblvv\" (UniqueName: \"kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv\") pod \"horizon-7795d84f4f-bz4f2\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.534176 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.577540 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.645246 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.689974 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.715390 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:31:59 crc kubenswrapper[4833]: I0127 14:31:59.813563 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:31:59 crc kubenswrapper[4833]: W0127 14:31:59.895648 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b00b686_dbb3_4ed2_a893_82aa39224575.slice/crio-9c5ce41623e5c5aea4f5be8c4f22c8f76f24782ffed786fb1542a6108e1c7f7e WatchSource:0}: Error finding container 9c5ce41623e5c5aea4f5be8c4f22c8f76f24782ffed786fb1542a6108e1c7f7e: Status 404 returned error can't find the container with id 9c5ce41623e5c5aea4f5be8c4f22c8f76f24782ffed786fb1542a6108e1c7f7e Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.002570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerStarted","Data":"25ffafadfbfe2dd95ef9e333a4c17c9aea0a169ae500378b3c153db5610bdf4c"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.017119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4f484fb9-t4v96" event={"ID":"62584aaf-3f85-4383-8da4-ea3366edf67c","Type":"ContainerStarted","Data":"d6e749dea9e53b842677aa9f91ce07e5d1577974202d7273ea00cc8275c7cc76"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.033988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-mtnxp" event={"ID":"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b","Type":"ContainerStarted","Data":"a1ed3db3a6ce62035fd7c48adee237998df1d76a854be80c61fb77305399c6bd"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.034036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtnxp" event={"ID":"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b","Type":"ContainerStarted","Data":"a050e6c980116e35a4aab5792a5a4c45476ab013a3d4884347a525a8778cc623"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.044238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" event={"ID":"10bb060d-4709-441d-859b-65bf70812174","Type":"ContainerStarted","Data":"7a881e9e3a7bba67db442acb977c2c117f151409087056e1aada6699b883c8a9"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.061982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerStarted","Data":"5faff59a7cf42ed6aa148daac795723653c84815eff132d1ee327a421b4baa1a"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.069168 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mtnxp" podStartSLOduration=3.069146714 podStartE2EDuration="3.069146714s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:00.062232609 +0000 UTC m=+1221.713557011" watchObservedRunningTime="2026-01-27 14:32:00.069146714 +0000 UTC m=+1221.720471116" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.078553 4833 generic.go:334] "Generic (PLEG): container finished" podID="c7929103-eabc-4bee-9150-7b016dd29dc1" containerID="8c2d90890093d043b574cd61d929ae79c9b4b242ad12f4695ec9902b6f64094d" exitCode=0 Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 
14:32:00.078631 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" event={"ID":"c7929103-eabc-4bee-9150-7b016dd29dc1","Type":"ContainerDied","Data":"8c2d90890093d043b574cd61d929ae79c9b4b242ad12f4695ec9902b6f64094d"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.095629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6dqbw" event={"ID":"a324f832-7082-443a-87c7-3cef46ebe7ea","Type":"ContainerStarted","Data":"12c15c044b482b0bc12c89a70a6a62c9a1a5e758c2fd02eb439e78f257f545a4"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.098077 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2j7w8" event={"ID":"cffa0abf-c51c-480c-8650-0a90cad148ef","Type":"ContainerStarted","Data":"1448574a4fa6bf4f370a7b1032c0cc545ac2d9f1f7f3845ce6f0c685b43efdd7"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.099790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cjff8" event={"ID":"32678252-925f-4f5c-9602-5409032b6063","Type":"ContainerStarted","Data":"a72ac9feae6c29dfb12671b2399b0d8173d85d4dd1d97318a997e2de637052de"} Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.191744 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2j7w8" podStartSLOduration=4.191723679 podStartE2EDuration="4.191723679s" podCreationTimestamp="2026-01-27 14:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:00.147386182 +0000 UTC m=+1221.798710604" watchObservedRunningTime="2026-01-27 14:32:00.191723679 +0000 UTC m=+1221.843048081" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.194385 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.372494 4833 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"] Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.813333 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssqdh\" (UniqueName: \"kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh\") pod \"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911217 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb\") pod \"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911309 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0\") pod \"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb\") pod \"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911504 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config\") pod 
\"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.911706 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc\") pod \"c7929103-eabc-4bee-9150-7b016dd29dc1\" (UID: \"c7929103-eabc-4bee-9150-7b016dd29dc1\") " Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.935796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh" (OuterVolumeSpecName: "kube-api-access-ssqdh") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "kube-api-access-ssqdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.948749 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.949073 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.952434 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.959466 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:00 crc kubenswrapper[4833]: I0127 14:32:00.962500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config" (OuterVolumeSpecName: "config") pod "c7929103-eabc-4bee-9150-7b016dd29dc1" (UID: "c7929103-eabc-4bee-9150-7b016dd29dc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.021796 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.022210 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssqdh\" (UniqueName: \"kubernetes.io/projected/c7929103-eabc-4bee-9150-7b016dd29dc1-kube-api-access-ssqdh\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.022227 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.022240 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.022252 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.022265 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929103-eabc-4bee-9150-7b016dd29dc1-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.116093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" event={"ID":"c7929103-eabc-4bee-9150-7b016dd29dc1","Type":"ContainerDied","Data":"79c1983bc0165aa35b96e958fa49b2aa088fb9d597a24f4becb84338a98a0571"} Jan 27 14:32:01 crc 
kubenswrapper[4833]: I0127 14:32:01.116143 4833 scope.go:117] "RemoveContainer" containerID="8c2d90890093d043b574cd61d929ae79c9b4b242ad12f4695ec9902b6f64094d" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.116301 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-2jltb" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.134892 4833 generic.go:334] "Generic (PLEG): container finished" podID="10bb060d-4709-441d-859b-65bf70812174" containerID="00f617fca33466c92a5979ead869e755bb22c1d5c358e90e133531bce2ecbf47" exitCode=0 Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.134999 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" event={"ID":"10bb060d-4709-441d-859b-65bf70812174","Type":"ContainerDied","Data":"00f617fca33466c92a5979ead869e755bb22c1d5c358e90e133531bce2ecbf47"} Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.143351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerStarted","Data":"6a203a72a0ef01982a46ba208efcee53bb70c4bbd82fc06d9b8e9d457d4493c9"} Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.145766 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerStarted","Data":"4d43bc9b157faa35acd5288410c31c96a730e8eaedeb8e5cfba18fbdabdbc5c9"} Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.149271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerStarted","Data":"9c5ce41623e5c5aea4f5be8c4f22c8f76f24782ffed786fb1542a6108e1c7f7e"} Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.224296 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cbbb5d3d-ad63-4190-a51a-d1360bcc5c97" path="/var/lib/kubelet/pods/cbbb5d3d-ad63-4190-a51a-d1360bcc5c97/volumes" Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.311397 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.323061 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-2jltb"] Jan 27 14:32:01 crc kubenswrapper[4833]: I0127 14:32:01.421917 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:02 crc kubenswrapper[4833]: I0127 14:32:02.177901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerStarted","Data":"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2"} Jan 27 14:32:02 crc kubenswrapper[4833]: I0127 14:32:02.180096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerStarted","Data":"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43"} Jan 27 14:32:02 crc kubenswrapper[4833]: I0127 14:32:02.183424 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" event={"ID":"10bb060d-4709-441d-859b-65bf70812174","Type":"ContainerStarted","Data":"c19944e21f04bb148433f38c6652ada961744c5791d797dee80faccce8c4c7dc"} Jan 27 14:32:02 crc kubenswrapper[4833]: I0127 14:32:02.184510 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:32:02 crc kubenswrapper[4833]: I0127 14:32:02.202428 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" podStartSLOduration=5.202411491 podStartE2EDuration="5.202411491s" 
podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:02.201489329 +0000 UTC m=+1223.852813751" watchObservedRunningTime="2026-01-27 14:32:02.202411491 +0000 UTC m=+1223.853735893" Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.195875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerStarted","Data":"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14"} Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.195970 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-log" containerID="cri-o://f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" gracePeriod=30 Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.196011 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-httpd" containerID="cri-o://c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" gracePeriod=30 Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.201382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerStarted","Data":"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978"} Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.201646 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-log" 
containerID="cri-o://43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" gracePeriod=30 Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.201803 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-httpd" containerID="cri-o://b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" gracePeriod=30 Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.228096 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.228075887 podStartE2EDuration="6.228075887s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:03.215860185 +0000 UTC m=+1224.867184587" watchObservedRunningTime="2026-01-27 14:32:03.228075887 +0000 UTC m=+1224.879400289" Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.231715 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7929103-eabc-4bee-9150-7b016dd29dc1" path="/var/lib/kubelet/pods/c7929103-eabc-4bee-9150-7b016dd29dc1/volumes" Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.273554 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.273531292 podStartE2EDuration="6.273531292s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:03.256760381 +0000 UTC m=+1224.908084803" watchObservedRunningTime="2026-01-27 14:32:03.273531292 +0000 UTC m=+1224.924855694" Jan 27 14:32:03 crc kubenswrapper[4833]: I0127 14:32:03.933427 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.019381 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.103595 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj2l4\" (UniqueName: \"kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104263 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104324 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: 
\"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104340 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbvd5\" (UniqueName: \"kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104529 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104563 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104600 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104632 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs\") pod \"4b00b686-dbb3-4ed2-a893-82aa39224575\" (UID: \"4b00b686-dbb3-4ed2-a893-82aa39224575\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104667 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.104702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs\") pod \"6d1b347e-564b-4691-b115-5395f6b335cf\" (UID: \"6d1b347e-564b-4691-b115-5395f6b335cf\") " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.105279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs" (OuterVolumeSpecName: "logs") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.105692 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.105716 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.106194 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs" (OuterVolumeSpecName: "logs") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.111925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts" (OuterVolumeSpecName: "scripts") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.117955 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.118333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5" (OuterVolumeSpecName: "kube-api-access-qbvd5") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "kube-api-access-qbvd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.119734 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts" (OuterVolumeSpecName: "scripts") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.120251 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4" (OuterVolumeSpecName: "kube-api-access-tj2l4") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "kube-api-access-tj2l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.125324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.166402 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.175669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206115 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbvd5\" (UniqueName: \"kubernetes.io/projected/6d1b347e-564b-4691-b115-5395f6b335cf-kube-api-access-qbvd5\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206142 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206153 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206162 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206170 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206196 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206204 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1b347e-564b-4691-b115-5395f6b335cf-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206212 4833 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tj2l4\" (UniqueName: \"kubernetes.io/projected/4b00b686-dbb3-4ed2-a893-82aa39224575-kube-api-access-tj2l4\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206225 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206234 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206242 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.206249 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b00b686-dbb3-4ed2-a893-82aa39224575-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.213508 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data" (OuterVolumeSpecName: "config-data") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215384 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b347e-564b-4691-b115-5395f6b335cf" containerID="c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" exitCode=0 Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215416 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerDied","Data":"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215418 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b347e-564b-4691-b115-5395f6b335cf" containerID="f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" exitCode=143 Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.215836 4833 scope.go:117] "RemoveContainer" containerID="c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.216063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerDied","Data":"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.216086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6d1b347e-564b-4691-b115-5395f6b335cf","Type":"ContainerDied","Data":"6a203a72a0ef01982a46ba208efcee53bb70c4bbd82fc06d9b8e9d457d4493c9"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.223763 4833 generic.go:334] 
"Generic (PLEG): container finished" podID="cffa0abf-c51c-480c-8650-0a90cad148ef" containerID="1448574a4fa6bf4f370a7b1032c0cc545ac2d9f1f7f3845ce6f0c685b43efdd7" exitCode=0 Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.223836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2j7w8" event={"ID":"cffa0abf-c51c-480c-8650-0a90cad148ef","Type":"ContainerDied","Data":"1448574a4fa6bf4f370a7b1032c0cc545ac2d9f1f7f3845ce6f0c685b43efdd7"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.233367 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.235319 4833 generic.go:334] "Generic (PLEG): container finished" podID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerID="b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" exitCode=0 Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.235365 4833 generic.go:334] "Generic (PLEG): container finished" podID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerID="43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" exitCode=143 Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.236456 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.236604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerDied","Data":"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.236638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerDied","Data":"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.236651 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b00b686-dbb3-4ed2-a893-82aa39224575","Type":"ContainerDied","Data":"9c5ce41623e5c5aea4f5be8c4f22c8f76f24782ffed786fb1542a6108e1c7f7e"} Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.251438 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data" (OuterVolumeSpecName: "config-data") pod "4b00b686-dbb3-4ed2-a893-82aa39224575" (UID: "4b00b686-dbb3-4ed2-a893-82aa39224575"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.255877 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.275035 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6d1b347e-564b-4691-b115-5395f6b335cf" (UID: "6d1b347e-564b-4691-b115-5395f6b335cf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308528 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308561 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308570 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308577 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308586 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1b347e-564b-4691-b115-5395f6b335cf-internal-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.308594 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b00b686-dbb3-4ed2-a893-82aa39224575-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.548753 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.571020 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604179 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: E0127 14:32:04.604689 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604711 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: E0127 14:32:04.604736 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7929103-eabc-4bee-9150-7b016dd29dc1" containerName="init" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604743 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7929103-eabc-4bee-9150-7b016dd29dc1" containerName="init" Jan 27 14:32:04 crc kubenswrapper[4833]: E0127 14:32:04.604763 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604769 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: E0127 14:32:04.604779 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604785 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: E0127 14:32:04.604804 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.604810 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605023 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7929103-eabc-4bee-9150-7b016dd29dc1" containerName="init" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605034 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605045 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605055 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" containerName="glance-log" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605061 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" containerName="glance-httpd" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.605965 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.612986 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7fbqk" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.613238 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.613366 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.613472 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.647301 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.656783 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.676720 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.685970 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.689080 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.691586 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.693984 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.696353 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716629 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716684 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8sw\" 
(UniqueName: \"kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716762 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716817 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.716900 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818638 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818762 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818788 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj8sw\" (UniqueName: \"kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818846 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818937 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtvrw\" (UniqueName: \"kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.818996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.819037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.819086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.819124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.819149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.819174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.825638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.836128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.836456 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.841197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.862362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.863100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.889628 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.918655 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj8sw\" (UniqueName: \"kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.923660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.923753 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.923797 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.923844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.924406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.924482 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtvrw\" (UniqueName: \"kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.924524 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.925053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.925116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.925549 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.925842 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.935578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 
14:32:04.936667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.958052 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.960945 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:04 crc kubenswrapper[4833]: I0127 14:32:04.966815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtvrw\" (UniqueName: \"kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.037701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.077114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.229886 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b00b686-dbb3-4ed2-a893-82aa39224575" path="/var/lib/kubelet/pods/4b00b686-dbb3-4ed2-a893-82aa39224575/volumes" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.230894 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.231077 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1b347e-564b-4691-b115-5395f6b335cf" path="/var/lib/kubelet/pods/6d1b347e-564b-4691-b115-5395f6b335cf/volumes" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.370178 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.806220 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.846196 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.848010 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.850689 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.864514 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.870329 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.908044 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.924517 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cd9489696-52kzm"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.926397 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.953733 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cd9489696-52kzm"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.962457 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982635 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982874 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmjh\" (UniqueName: \"kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.982944 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:05 crc kubenswrapper[4833]: I0127 14:32:05.983002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.085386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534c5b75-240a-4ded-bb13-f05eb3158527-logs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.085871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.085996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-secret-key\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086037 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-config-data\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982x8\" (UniqueName: \"kubernetes.io/projected/534c5b75-240a-4ded-bb13-f05eb3158527-kube-api-access-982x8\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-scripts\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086505 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086626 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klmjh\" (UniqueName: 
\"kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.086912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-combined-ca-bundle\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.087057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-tls-certs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.088496 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.088820 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.090014 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.094155 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.094652 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.094721 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: 
\"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.104402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klmjh\" (UniqueName: \"kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh\") pod \"horizon-54f64dd7dd-8w4dp\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.187286 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982x8\" (UniqueName: \"kubernetes.io/projected/534c5b75-240a-4ded-bb13-f05eb3158527-kube-api-access-982x8\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-scripts\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-combined-ca-bundle\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-tls-certs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192416 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534c5b75-240a-4ded-bb13-f05eb3158527-logs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-secret-key\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.192545 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-config-data\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.193728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-scripts\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.193796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/534c5b75-240a-4ded-bb13-f05eb3158527-config-data\") pod \"horizon-6cd9489696-52kzm\" (UID: 
\"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.194138 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534c5b75-240a-4ded-bb13-f05eb3158527-logs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.197612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-tls-certs\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.198391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-combined-ca-bundle\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.199220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/534c5b75-240a-4ded-bb13-f05eb3158527-horizon-secret-key\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: I0127 14:32:06.209382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982x8\" (UniqueName: \"kubernetes.io/projected/534c5b75-240a-4ded-bb13-f05eb3158527-kube-api-access-982x8\") pod \"horizon-6cd9489696-52kzm\" (UID: \"534c5b75-240a-4ded-bb13-f05eb3158527\") " pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:06 crc kubenswrapper[4833]: 
I0127 14:32:06.262125 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:08 crc kubenswrapper[4833]: I0127 14:32:08.400649 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:32:08 crc kubenswrapper[4833]: I0127 14:32:08.457894 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:32:08 crc kubenswrapper[4833]: I0127 14:32:08.458170 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" containerID="cri-o://bf8cc554b13117fc6146d9266ac1ff8fa909587717b0c5e0ec0ebd03c2405ee6" gracePeriod=10 Jan 27 14:32:08 crc kubenswrapper[4833]: I0127 14:32:08.893151 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 27 14:32:09 crc kubenswrapper[4833]: I0127 14:32:09.300668 4833 generic.go:334] "Generic (PLEG): container finished" podID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerID="bf8cc554b13117fc6146d9266ac1ff8fa909587717b0c5e0ec0ebd03c2405ee6" exitCode=0 Jan 27 14:32:09 crc kubenswrapper[4833]: I0127 14:32:09.300734 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" event={"ID":"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2","Type":"ContainerDied","Data":"bf8cc554b13117fc6146d9266ac1ff8fa909587717b0c5e0ec0ebd03c2405ee6"} Jan 27 14:32:11 crc kubenswrapper[4833]: I0127 14:32:11.422277 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:11 crc kubenswrapper[4833]: I0127 14:32:11.427773 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:12 crc kubenswrapper[4833]: I0127 14:32:12.338662 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:13 crc kubenswrapper[4833]: I0127 14:32:13.893142 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.030122 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.181760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.181871 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.181960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.182071 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.182114 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.182158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp6fl\" (UniqueName: \"kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl\") pod \"cffa0abf-c51c-480c-8650-0a90cad148ef\" (UID: \"cffa0abf-c51c-480c-8650-0a90cad148ef\") " Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.188022 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.189649 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts" (OuterVolumeSpecName: "scripts") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.194188 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.196023 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl" (OuterVolumeSpecName: "kube-api-access-wp6fl") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "kube-api-access-wp6fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.212419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data" (OuterVolumeSpecName: "config-data") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.218240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cffa0abf-c51c-480c-8650-0a90cad148ef" (UID: "cffa0abf-c51c-480c-8650-0a90cad148ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285164 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285214 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285235 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp6fl\" (UniqueName: \"kubernetes.io/projected/cffa0abf-c51c-480c-8650-0a90cad148ef-kube-api-access-wp6fl\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285251 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285267 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.285284 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa0abf-c51c-480c-8650-0a90cad148ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.355772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2j7w8" event={"ID":"cffa0abf-c51c-480c-8650-0a90cad148ef","Type":"ContainerDied","Data":"ca1a5c46a74fe8e7d3074bea12534efe00a027b9adb85138eb2ef01dc5420713"} Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 
14:32:14.355807 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1a5c46a74fe8e7d3074bea12534efe00a027b9adb85138eb2ef01dc5420713" Jan 27 14:32:14 crc kubenswrapper[4833]: I0127 14:32:14.355850 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2j7w8" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.222941 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2j7w8"] Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.225308 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2j7w8"] Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.306893 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-86px7"] Jan 27 14:32:15 crc kubenswrapper[4833]: E0127 14:32:15.307281 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffa0abf-c51c-480c-8650-0a90cad148ef" containerName="keystone-bootstrap" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.307299 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffa0abf-c51c-480c-8650-0a90cad148ef" containerName="keystone-bootstrap" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.307494 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffa0abf-c51c-480c-8650-0a90cad148ef" containerName="keystone-bootstrap" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.308061 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.310000 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.310242 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fg8kt" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.310260 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.311016 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.311962 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.324998 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-86px7"] Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.407929 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.408368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.408477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mt2mx\" (UniqueName: \"kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.408507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.408532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.408603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.510686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.511641 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt2mx\" (UniqueName: 
\"kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.511768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.511874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.512098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.512219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.517866 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle\") pod \"keystone-bootstrap-86px7\" (UID: 
\"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.518639 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.518968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.521374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.532858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.540533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt2mx\" (UniqueName: \"kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx\") pod \"keystone-bootstrap-86px7\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:15 crc kubenswrapper[4833]: I0127 14:32:15.628553 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:17 crc kubenswrapper[4833]: I0127 14:32:17.220765 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffa0abf-c51c-480c-8650-0a90cad148ef" path="/var/lib/kubelet/pods/cffa0abf-c51c-480c-8650-0a90cad148ef/volumes" Jan 27 14:32:18 crc kubenswrapper[4833]: I0127 14:32:18.893388 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Jan 27 14:32:18 crc kubenswrapper[4833]: I0127 14:32:18.893776 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:32:18 crc kubenswrapper[4833]: E0127 14:32:18.970500 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 27 14:32:18 crc kubenswrapper[4833]: E0127 14:32:18.971153 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnbb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-6dqbw_openstack(a324f832-7082-443a-87c7-3cef46ebe7ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:32:18 crc kubenswrapper[4833]: E0127 14:32:18.972685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-6dqbw" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" Jan 27 14:32:19 crc kubenswrapper[4833]: E0127 14:32:19.402301 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-6dqbw" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" Jan 27 14:32:20 crc kubenswrapper[4833]: E0127 14:32:20.951105 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 27 14:32:20 crc kubenswrapper[4833]: E0127 14:32:20.951522 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n686h5c5h594hd4h557h669hf7h664h68fh57h58h9hb8h58dh59dh68dhd5hcch5d7h5fbh9ch97h65dh56h557h86hfh67bh67dh8chbch685q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqt9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f4f484fb9-t4v96_openstack(62584aaf-3f85-4383-8da4-ea3366edf67c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:32:20 crc kubenswrapper[4833]: E0127 
14:32:20.953956 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f4f484fb9-t4v96" podUID="62584aaf-3f85-4383-8da4-ea3366edf67c" Jan 27 14:32:23 crc kubenswrapper[4833]: I0127 14:32:23.440511 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" containerID="a1ed3db3a6ce62035fd7c48adee237998df1d76a854be80c61fb77305399c6bd" exitCode=0 Jan 27 14:32:23 crc kubenswrapper[4833]: I0127 14:32:23.440647 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtnxp" event={"ID":"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b","Type":"ContainerDied","Data":"a1ed3db3a6ce62035fd7c48adee237998df1d76a854be80c61fb77305399c6bd"} Jan 27 14:32:28 crc kubenswrapper[4833]: I0127 14:32:28.893612 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.115151 4833 scope.go:117] "RemoveContainer" containerID="f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" Jan 27 14:32:30 crc kubenswrapper[4833]: E0127 14:32:30.772951 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 14:32:30 crc kubenswrapper[4833]: E0127 14:32:30.773343 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rbgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cjff8_openstack(32678252-925f-4f5c-9602-5409032b6063): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:32:30 crc kubenswrapper[4833]: E0127 14:32:30.775300 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cjff8" podUID="32678252-925f-4f5c-9602-5409032b6063" Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.848869 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.979969 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmr5j\" (UniqueName: \"kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j\") pod \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.980101 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb\") pod \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.980188 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config\") pod \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.980256 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc\") pod \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.980287 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb\") pod \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\" (UID: \"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2\") " Jan 27 14:32:30 crc kubenswrapper[4833]: I0127 14:32:30.986812 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j" (OuterVolumeSpecName: "kube-api-access-gmr5j") pod "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" (UID: "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2"). InnerVolumeSpecName "kube-api-access-gmr5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.042470 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" (UID: "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.047412 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config" (OuterVolumeSpecName: "config") pod "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" (UID: "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.065899 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" (UID: "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.077564 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" (UID: "50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.082711 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.082743 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.082752 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.082760 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.082769 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmr5j\" (UniqueName: \"kubernetes.io/projected/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2-kube-api-access-gmr5j\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.515117 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.515099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" event={"ID":"50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2","Type":"ContainerDied","Data":"6f8de7ab18d91a26e3c28dbef40d408370e6ecad453485d6f44d686e6ff582ab"} Jan 27 14:32:31 crc kubenswrapper[4833]: E0127 14:32:31.518743 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cjff8" podUID="32678252-925f-4f5c-9602-5409032b6063" Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.549821 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:32:31 crc kubenswrapper[4833]: I0127 14:32:31.558969 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-pj4ff"] Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.068697 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.069066 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdhpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ht8p9_openstack(a93af5fb-6812-4f30-9e89-e8c58b01a69e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.070324 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-ht8p9" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.158341 4833 scope.go:117] "RemoveContainer" containerID="c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.161624 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14\": container with ID starting with c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14 not found: ID does not exist" containerID="c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.161679 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14"} err="failed to get container status \"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14\": rpc error: code = NotFound desc = could not find container \"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14\": container with ID starting with c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14 not found: ID does not exist" Jan 27 14:32:32 crc 
kubenswrapper[4833]: I0127 14:32:32.161719 4833 scope.go:117] "RemoveContainer" containerID="f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.163280 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2\": container with ID starting with f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2 not found: ID does not exist" containerID="f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.163534 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2"} err="failed to get container status \"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2\": rpc error: code = NotFound desc = could not find container \"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2\": container with ID starting with f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.163628 4833 scope.go:117] "RemoveContainer" containerID="c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.164402 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14"} err="failed to get container status \"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14\": rpc error: code = NotFound desc = could not find container \"c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14\": container with ID starting with c37308c5ed49f6c56757d608f7b968bfeadb0e53d9b511da24290f69dc4e7d14 not found: ID does not exist" Jan 27 
14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.164471 4833 scope.go:117] "RemoveContainer" containerID="f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.164873 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2"} err="failed to get container status \"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2\": rpc error: code = NotFound desc = could not find container \"f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2\": container with ID starting with f3c136c6be3f053e6c18646b1e5d5bf9c967bb26825ec8b4effdc113d1cacab2 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.164904 4833 scope.go:117] "RemoveContainer" containerID="b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.177631 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.187358 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.205931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs\") pod \"62584aaf-3f85-4383-8da4-ea3366edf67c\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.206420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs" (OuterVolumeSpecName: "logs") pod "62584aaf-3f85-4383-8da4-ea3366edf67c" (UID: "62584aaf-3f85-4383-8da4-ea3366edf67c"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.207242 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key\") pod \"62584aaf-3f85-4383-8da4-ea3366edf67c\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.207320 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqt9w\" (UniqueName: \"kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w\") pod \"62584aaf-3f85-4383-8da4-ea3366edf67c\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.207352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts\") pod \"62584aaf-3f85-4383-8da4-ea3366edf67c\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.208077 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts" (OuterVolumeSpecName: "scripts") pod "62584aaf-3f85-4383-8da4-ea3366edf67c" (UID: "62584aaf-3f85-4383-8da4-ea3366edf67c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.208710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config\") pod \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.208760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data\") pod \"62584aaf-3f85-4383-8da4-ea3366edf67c\" (UID: \"62584aaf-3f85-4383-8da4-ea3366edf67c\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.209352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle\") pod \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.209405 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kn4z\" (UniqueName: \"kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z\") pod \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\" (UID: \"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b\") " Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.210333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data" (OuterVolumeSpecName: "config-data") pod "62584aaf-3f85-4383-8da4-ea3366edf67c" (UID: "62584aaf-3f85-4383-8da4-ea3366edf67c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.215236 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62584aaf-3f85-4383-8da4-ea3366edf67c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.215270 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.215281 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62584aaf-3f85-4383-8da4-ea3366edf67c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.223965 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z" (OuterVolumeSpecName: "kube-api-access-9kn4z") pod "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" (UID: "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b"). InnerVolumeSpecName "kube-api-access-9kn4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.226181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w" (OuterVolumeSpecName: "kube-api-access-jqt9w") pod "62584aaf-3f85-4383-8da4-ea3366edf67c" (UID: "62584aaf-3f85-4383-8da4-ea3366edf67c"). InnerVolumeSpecName "kube-api-access-jqt9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.236791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "62584aaf-3f85-4383-8da4-ea3366edf67c" (UID: "62584aaf-3f85-4383-8da4-ea3366edf67c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.252779 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config" (OuterVolumeSpecName: "config") pod "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" (UID: "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.268369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" (UID: "5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.279277 4833 scope.go:117] "RemoveContainer" containerID="43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.316547 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.316749 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kn4z\" (UniqueName: \"kubernetes.io/projected/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-kube-api-access-9kn4z\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.316760 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62584aaf-3f85-4383-8da4-ea3366edf67c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.316770 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqt9w\" (UniqueName: \"kubernetes.io/projected/62584aaf-3f85-4383-8da4-ea3366edf67c-kube-api-access-jqt9w\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.316778 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.433270 4833 scope.go:117] "RemoveContainer" containerID="b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.436646 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978\": container with ID starting with b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978 not found: ID does not exist" containerID="b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.436726 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978"} err="failed to get container status \"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978\": rpc error: code = NotFound desc = could not find container \"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978\": container with ID starting with b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.436767 4833 scope.go:117] "RemoveContainer" containerID="43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.438590 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43\": container with ID starting with 43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43 not found: ID does not exist" containerID="43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.438634 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43"} err="failed to get container status \"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43\": rpc error: code = NotFound desc = could not find container \"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43\": container with ID 
starting with 43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.438664 4833 scope.go:117] "RemoveContainer" containerID="b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.439646 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978"} err="failed to get container status \"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978\": rpc error: code = NotFound desc = could not find container \"b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978\": container with ID starting with b2f6e1f4c5c9855c3163193da9ff229ae0976774e3aaaaeb075870321d843978 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.442853 4833 scope.go:117] "RemoveContainer" containerID="43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.443509 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43"} err="failed to get container status \"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43\": rpc error: code = NotFound desc = could not find container \"43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43\": container with ID starting with 43456c394079a641e35cb90c690be73e8a0b5f392794c14e12853854a16edc43 not found: ID does not exist" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.443556 4833 scope.go:117] "RemoveContainer" containerID="bf8cc554b13117fc6146d9266ac1ff8fa909587717b0c5e0ec0ebd03c2405ee6" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.486698 4833 scope.go:117] "RemoveContainer" 
containerID="8395c71513ab3346d0e93a2ecb1f581533d856fcc28e88e7caf2c93d0d8de72f" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.532847 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cd9489696-52kzm"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.549246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4f484fb9-t4v96" event={"ID":"62584aaf-3f85-4383-8da4-ea3366edf67c","Type":"ContainerDied","Data":"d6e749dea9e53b842677aa9f91ce07e5d1577974202d7273ea00cc8275c7cc76"} Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.549333 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4f484fb9-t4v96" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.558386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtnxp" event={"ID":"5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b","Type":"ContainerDied","Data":"a050e6c980116e35a4aab5792a5a4c45476ab013a3d4884347a525a8778cc623"} Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.558424 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a050e6c980116e35a4aab5792a5a4c45476ab013a3d4884347a525a8778cc623" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.558498 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mtnxp" Jan 27 14:32:32 crc kubenswrapper[4833]: E0127 14:32:32.580606 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-ht8p9" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.743469 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.768373 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f4f484fb9-t4v96"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.781712 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.799571 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.856473 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-86px7"] Jan 27 14:32:32 crc kubenswrapper[4833]: I0127 14:32:32.919228 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.347012 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" path="/var/lib/kubelet/pods/50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2/volumes" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.350123 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62584aaf-3f85-4383-8da4-ea3366edf67c" path="/var/lib/kubelet/pods/62584aaf-3f85-4383-8da4-ea3366edf67c/volumes" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.350662 4833 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"] Jan 27 14:32:33 crc kubenswrapper[4833]: E0127 14:32:33.351087 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" containerName="neutron-db-sync" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.351126 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" containerName="neutron-db-sync" Jan 27 14:32:33 crc kubenswrapper[4833]: E0127 14:32:33.351140 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.351146 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" Jan 27 14:32:33 crc kubenswrapper[4833]: E0127 14:32:33.351164 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="init" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.351170 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="init" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.351653 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.351681 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" containerName="neutron-db-sync" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.358879 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.370326 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"] Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.457755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb5l\" (UniqueName: \"kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.458169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.458209 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.458247 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.458270 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.458299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.565112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.565182 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.565253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.565270 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.565311 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.566652 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.567071 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.567369 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.567774 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.567838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb5l\" (UniqueName: \"kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.567868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.588502 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.590288 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.594083 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.600462 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.600595 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.600710 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bthsg" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.613136 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb5l\" (UniqueName: \"kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l\") pod \"dnsmasq-dns-5ccc5c4795-2nbmb\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") " pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.641826 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.642471 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerStarted","Data":"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.642500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerStarted","Data":"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.642509 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerStarted","Data":"2267442ea6ef27ee3b646c627c6e82143545fde7070e073fb070cfad96cdc012"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.658675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerStarted","Data":"0904e88ca45b56c3cf92187d9255bb495faff6ccb2882e669684527b32fe1fde"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.670776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.670827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw4x\" (UniqueName: \"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.670871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.670920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config\") pod 
\"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.670942 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.673660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wchjj" event={"ID":"3df071dc-eb5b-40dd-85ea-430f44ab198f","Type":"ContainerStarted","Data":"cf0973c23f500358c49a8dadf69c6da79910b7909dc76dc24c87aa2b1df39b81"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.699215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd9489696-52kzm" event={"ID":"534c5b75-240a-4ded-bb13-f05eb3158527","Type":"ContainerStarted","Data":"1b67b2f9d8abcebe9098ed72ef3e6132ee0c6bd0803a12fe596ff1ef3a7338b4"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.699256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd9489696-52kzm" event={"ID":"534c5b75-240a-4ded-bb13-f05eb3158527","Type":"ContainerStarted","Data":"059e40ab345dfd579c11810f98e1998e60dc6db93f775aa5b28c24179fa79226"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.699265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cd9489696-52kzm" event={"ID":"534c5b75-240a-4ded-bb13-f05eb3158527","Type":"ContainerStarted","Data":"55e07b9bc609ced7041166c50bbda34d391f10ea41003f06241dfa8d17a28c4c"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.703823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerStarted","Data":"6fb8d7492567691ac359c7e114c45a9d8a8c69ffb220260d95d5b78cf30eff46"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.712303 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.713002 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7795d84f4f-bz4f2" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon-log" containerID="cri-o://49b577c1fde8b5cbe8ecc68b5275570b3e97fba82792f204d3c4e448bf975eb4" gracePeriod=30 Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.713225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerStarted","Data":"bb2885ee72f3d9527f85bbf5fbf469bcf45058d8227ea6e033e8e2bf2956395d"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.713261 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerStarted","Data":"49b577c1fde8b5cbe8ecc68b5275570b3e97fba82792f204d3c4e448bf975eb4"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.713285 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7795d84f4f-bz4f2" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon" containerID="cri-o://bb2885ee72f3d9527f85bbf5fbf469bcf45058d8227ea6e033e8e2bf2956395d" gracePeriod=30 Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.723078 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54f64dd7dd-8w4dp" podStartSLOduration=28.723045637 podStartE2EDuration="28.723045637s" podCreationTimestamp="2026-01-27 14:32:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:33.690668823 +0000 UTC m=+1255.341993225" watchObservedRunningTime="2026-01-27 14:32:33.723045637 +0000 UTC m=+1255.374370039" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.727007 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86px7" event={"ID":"5a57c811-cef6-458c-bb52-ef9e0861e39a","Type":"ContainerStarted","Data":"51fd440fc6d895911812687450be70452d3f42f42f6ab5fcf444d1029a81fbe7"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.727601 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86px7" event={"ID":"5a57c811-cef6-458c-bb52-ef9e0861e39a","Type":"ContainerStarted","Data":"52aee9b58558230c125b067b2e856b8f4aa4a4282fd66d32b8d84f70e964d41b"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.739613 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6dqbw" event={"ID":"a324f832-7082-443a-87c7-3cef46ebe7ea","Type":"ContainerStarted","Data":"71017171e2d70f8a3d9129c76f0441d479d83bb8220f8380b3b6cf9f10ec7a33"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.753021 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-wchjj" podStartSLOduration=3.310457509 podStartE2EDuration="1m9.753001317s" podCreationTimestamp="2026-01-27 14:31:24 +0000 UTC" firstStartedPulling="2026-01-27 14:31:25.809639862 +0000 UTC m=+1187.460964264" lastFinishedPulling="2026-01-27 14:32:32.25218367 +0000 UTC m=+1253.903508072" observedRunningTime="2026-01-27 14:32:33.720666193 +0000 UTC m=+1255.371990595" watchObservedRunningTime="2026-01-27 14:32:33.753001317 +0000 UTC m=+1255.404325729" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.758831 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7795d84f4f-bz4f2" podStartSLOduration=3.049576079 
podStartE2EDuration="34.758806116s" podCreationTimestamp="2026-01-27 14:31:59 +0000 UTC" firstStartedPulling="2026-01-27 14:32:00.588182005 +0000 UTC m=+1222.239506407" lastFinishedPulling="2026-01-27 14:32:32.297412042 +0000 UTC m=+1253.948736444" observedRunningTime="2026-01-27 14:32:33.751396761 +0000 UTC m=+1255.402721163" watchObservedRunningTime="2026-01-27 14:32:33.758806116 +0000 UTC m=+1255.410130518" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.772424 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.772493 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw4x\" (UniqueName: \"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.772545 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.772618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.772672 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.774030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerStarted","Data":"8416b850635a61e613093133ae2ee6027e326b434584a1deafe100b569a63bfb"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.789148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerStarted","Data":"71c1c059727ffa2ea65f10c1821c6e6ffd3ed28cea15698be92ba762e5cefec0"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.789427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerStarted","Data":"5649838668c320f607064d2a42be88f546636ab9f5c01ac6f06dd56dacba639e"} Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.789312 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7544d4446c-rj8rn" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon-log" containerID="cri-o://5649838668c320f607064d2a42be88f546636ab9f5c01ac6f06dd56dacba639e" gracePeriod=30 Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.789571 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7544d4446c-rj8rn" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon" containerID="cri-o://71c1c059727ffa2ea65f10c1821c6e6ffd3ed28cea15698be92ba762e5cefec0" gracePeriod=30 Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 
14:32:33.797284 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.803677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw4x\" (UniqueName: \"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.810687 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.812828 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.813519 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cd9489696-52kzm" podStartSLOduration=28.81350452 podStartE2EDuration="28.81350452s" podCreationTimestamp="2026-01-27 14:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:33.790638128 +0000 UTC m=+1255.441962530" watchObservedRunningTime="2026-01-27 14:32:33.81350452 +0000 UTC m=+1255.464828932" Jan 27 
14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.820122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config\") pod \"neutron-cfd687f8-694fk\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.838512 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.848474 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6dqbw" podStartSLOduration=3.9453686340000003 podStartE2EDuration="36.848437351s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="2026-01-27 14:31:59.375249348 +0000 UTC m=+1221.026573750" lastFinishedPulling="2026-01-27 14:32:32.278318065 +0000 UTC m=+1253.929642467" observedRunningTime="2026-01-27 14:32:33.818653975 +0000 UTC m=+1255.469978377" watchObservedRunningTime="2026-01-27 14:32:33.848437351 +0000 UTC m=+1255.499761753" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.859111 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-86px7" podStartSLOduration=18.859089079 podStartE2EDuration="18.859089079s" podCreationTimestamp="2026-01-27 14:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:33.851579092 +0000 UTC m=+1255.502903504" watchObservedRunningTime="2026-01-27 14:32:33.859089079 +0000 UTC m=+1255.510413481" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.896861 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-pj4ff" podUID="50c6ecc5-6ef8-4032-9e62-8f0ad1ff86f2" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.217.0.119:5353: i/o timeout" Jan 27 14:32:33 crc kubenswrapper[4833]: I0127 14:32:33.961806 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7544d4446c-rj8rn" podStartSLOduration=3.773771945 podStartE2EDuration="36.961786206s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="2026-01-27 14:31:58.843800571 +0000 UTC m=+1220.495124973" lastFinishedPulling="2026-01-27 14:32:32.031814802 +0000 UTC m=+1253.683139234" observedRunningTime="2026-01-27 14:32:33.89531328 +0000 UTC m=+1255.546637682" watchObservedRunningTime="2026-01-27 14:32:33.961786206 +0000 UTC m=+1255.613110608" Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.481924 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"] Jan 27 14:32:34 crc kubenswrapper[4833]: W0127 14:32:34.549708 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb675f1d_22c9_4f48_a415_e6a5fc15f357.slice/crio-6ea2167e7fb1ff2bb7968b8bacd4779478d9741a13f72f560284a9037fbb9ea9 WatchSource:0}: Error finding container 6ea2167e7fb1ff2bb7968b8bacd4779478d9741a13f72f560284a9037fbb9ea9: Status 404 returned error can't find the container with id 6ea2167e7fb1ff2bb7968b8bacd4779478d9741a13f72f560284a9037fbb9ea9 Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.608659 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.854642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" event={"ID":"bb675f1d-22c9-4f48-a415-e6a5fc15f357","Type":"ContainerStarted","Data":"6ea2167e7fb1ff2bb7968b8bacd4779478d9741a13f72f560284a9037fbb9ea9"} Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.867624 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-cfd687f8-694fk" event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerStarted","Data":"267556364bb755eee88dafca08bb21cfc414c5ac1615bfa33bb85191b0cf586a"} Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.879954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerStarted","Data":"340667f88427c71c4ac2fd229b288b420497a947646db21bdbda7c8715b1f72d"} Jan 27 14:32:34 crc kubenswrapper[4833]: I0127 14:32:34.917311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerStarted","Data":"05ec16c057b19c8b72b672a13f455ed4de3412c7e31d16ddaee36d7a6e56f1bb"} Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.188323 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.188855 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.193605 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8684fb6757-ql55d"] Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.195789 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.199582 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.199683 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.215862 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8684fb6757-ql55d"] Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.263804 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.264727 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.365619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.365689 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.365727 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs\") pod 
\"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.365763 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.365788 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kftwq\" (UniqueName: \"kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.366021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.366058 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.467915 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config\") pod 
\"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.467964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.467995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.468017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.468035 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kftwq\" (UniqueName: \"kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.468099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " 
pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.468117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.474733 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.474842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.475093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.475266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.476977 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.479550 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.489331 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kftwq\" (UniqueName: \"kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq\") pod \"neutron-8684fb6757-ql55d\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") " pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:36 crc kubenswrapper[4833]: I0127 14:32:36.578112 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:37 crc kubenswrapper[4833]: I0127 14:32:37.143303 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8684fb6757-ql55d"] Jan 27 14:32:37 crc kubenswrapper[4833]: W0127 14:32:37.147800 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd622569d_4961_40a5_8bfc_0f08e9ed8b82.slice/crio-d064bfb71e3964ebd719b533717ed5a221970616d9e2e33b7345f801b5b99f6f WatchSource:0}: Error finding container d064bfb71e3964ebd719b533717ed5a221970616d9e2e33b7345f801b5b99f6f: Status 404 returned error can't find the container with id d064bfb71e3964ebd719b533717ed5a221970616d9e2e33b7345f801b5b99f6f Jan 27 14:32:37 crc kubenswrapper[4833]: I0127 14:32:37.592890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:32:37 crc kubenswrapper[4833]: I0127 14:32:37.957948 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerStarted","Data":"d064bfb71e3964ebd719b533717ed5a221970616d9e2e33b7345f801b5b99f6f"} Jan 27 14:32:39 crc kubenswrapper[4833]: I0127 14:32:39.578342 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:32:39 crc kubenswrapper[4833]: I0127 14:32:39.992924 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cfd687f8-694fk" event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerStarted","Data":"e78a3768176c6dddaef8c1235fae1d9dc998347e7ed3fed4a3fe7516863211dd"} Jan 27 14:32:39 crc kubenswrapper[4833]: I0127 14:32:39.995387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" 
event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerStarted","Data":"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082"} Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.008067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerStarted","Data":"d3edf5f1bde9eb4cb440c47c59c35fcc1c015d4009c9ddc31283018d7ee3b35a"} Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.008211 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-log" containerID="cri-o://340667f88427c71c4ac2fd229b288b420497a947646db21bdbda7c8715b1f72d" gracePeriod=30 Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.008321 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-httpd" containerID="cri-o://d3edf5f1bde9eb4cb440c47c59c35fcc1c015d4009c9ddc31283018d7ee3b35a" gracePeriod=30 Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.017898 4833 generic.go:334] "Generic (PLEG): container finished" podID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerID="eaf8b553436a80f57df8d411e5c4a1eb8a3f5204de56414425a0cb50d553de6e" exitCode=0 Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.017975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" event={"ID":"bb675f1d-22c9-4f48-a415-e6a5fc15f357","Type":"ContainerDied","Data":"eaf8b553436a80f57df8d411e5c4a1eb8a3f5204de56414425a0cb50d553de6e"} Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.022787 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerStarted","Data":"157be186c655e14d04abc2641b1969aed463954e7ab0802d5c7ce62aad80e2c9"} Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.022975 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-log" containerID="cri-o://05ec16c057b19c8b72b672a13f455ed4de3412c7e31d16ddaee36d7a6e56f1bb" gracePeriod=30 Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.023061 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-httpd" containerID="cri-o://157be186c655e14d04abc2641b1969aed463954e7ab0802d5c7ce62aad80e2c9" gracePeriod=30 Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.032429 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=36.0324126 podStartE2EDuration="36.0324126s" podCreationTimestamp="2026-01-27 14:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:40.031685524 +0000 UTC m=+1261.683009946" watchObservedRunningTime="2026-01-27 14:32:40.0324126 +0000 UTC m=+1261.683737012" Jan 27 14:32:40 crc kubenswrapper[4833]: I0127 14:32:40.078939 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=36.07891858 podStartE2EDuration="36.07891858s" podCreationTimestamp="2026-01-27 14:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:40.075593106 +0000 UTC m=+1261.726917508" watchObservedRunningTime="2026-01-27 14:32:40.07891858 +0000 UTC 
m=+1261.730242982" Jan 27 14:32:41 crc kubenswrapper[4833]: I0127 14:32:41.063093 4833 generic.go:334] "Generic (PLEG): container finished" podID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerID="340667f88427c71c4ac2fd229b288b420497a947646db21bdbda7c8715b1f72d" exitCode=143 Jan 27 14:32:41 crc kubenswrapper[4833]: I0127 14:32:41.063393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerDied","Data":"340667f88427c71c4ac2fd229b288b420497a947646db21bdbda7c8715b1f72d"} Jan 27 14:32:41 crc kubenswrapper[4833]: I0127 14:32:41.071880 4833 generic.go:334] "Generic (PLEG): container finished" podID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerID="05ec16c057b19c8b72b672a13f455ed4de3412c7e31d16ddaee36d7a6e56f1bb" exitCode=143 Jan 27 14:32:41 crc kubenswrapper[4833]: I0127 14:32:41.071928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerDied","Data":"05ec16c057b19c8b72b672a13f455ed4de3412c7e31d16ddaee36d7a6e56f1bb"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.086178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" event={"ID":"bb675f1d-22c9-4f48-a415-e6a5fc15f357","Type":"ContainerStarted","Data":"6a59a8400d9cf95ef6a90835c4e68cf530b5b6a6b6b404bd2f9aec4109b0546c"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.086670 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.097716 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cfd687f8-694fk" event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerStarted","Data":"e64d014885d6d0eed2ac7ee73fbdd5bc168b0b6eee3e8c66aa48696ba90b9572"} Jan 27 14:32:42 crc 
kubenswrapper[4833]: I0127 14:32:42.098482 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.107597 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerStarted","Data":"a5a597e425bca6c647c9edc574b234b2e131cd0b46100565f0d6d9cf71113e04"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.112187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerStarted","Data":"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.112482 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.117014 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" podStartSLOduration=9.117000023 podStartE2EDuration="9.117000023s" podCreationTimestamp="2026-01-27 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:42.102676303 +0000 UTC m=+1263.754000705" watchObservedRunningTime="2026-01-27 14:32:42.117000023 +0000 UTC m=+1263.768324425" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.122393 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cfd687f8-694fk" podStartSLOduration=9.122379584 podStartE2EDuration="9.122379584s" podCreationTimestamp="2026-01-27 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:42.120924271 +0000 UTC m=+1263.772248673" 
watchObservedRunningTime="2026-01-27 14:32:42.122379584 +0000 UTC m=+1263.773703986" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.123023 4833 generic.go:334] "Generic (PLEG): container finished" podID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerID="d3edf5f1bde9eb4cb440c47c59c35fcc1c015d4009c9ddc31283018d7ee3b35a" exitCode=0 Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.123505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerDied","Data":"d3edf5f1bde9eb4cb440c47c59c35fcc1c015d4009c9ddc31283018d7ee3b35a"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.143550 4833 generic.go:334] "Generic (PLEG): container finished" podID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerID="157be186c655e14d04abc2641b1969aed463954e7ab0802d5c7ce62aad80e2c9" exitCode=0 Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.143589 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerDied","Data":"157be186c655e14d04abc2641b1969aed463954e7ab0802d5c7ce62aad80e2c9"} Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.155648 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8684fb6757-ql55d" podStartSLOduration=6.155624247 podStartE2EDuration="6.155624247s" podCreationTimestamp="2026-01-27 14:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:42.147618838 +0000 UTC m=+1263.798943260" watchObservedRunningTime="2026-01-27 14:32:42.155624247 +0000 UTC m=+1263.806948649" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.377280 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.388923 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495624 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495703 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495797 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc 
kubenswrapper[4833]: I0127 14:32:42.495830 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495944 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.495986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: 
\"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496129 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtvrw\" (UniqueName: \"kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496159 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data\") pod \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\" (UID: \"bc4a525b-9d53-4958-aa0a-4fb793ff8415\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496192 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj8sw\" (UniqueName: \"kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw\") pod \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\" (UID: \"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea\") " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496153 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs" (OuterVolumeSpecName: "logs") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.496557 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs" (OuterVolumeSpecName: "logs") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.503277 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.505617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts" (OuterVolumeSpecName: "scripts") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.506551 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw" (OuterVolumeSpecName: "kube-api-access-gtvrw") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "kube-api-access-gtvrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.507000 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.512607 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts" (OuterVolumeSpecName: "scripts") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.526906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw" (OuterVolumeSpecName: "kube-api-access-zj8sw") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "kube-api-access-zj8sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.539552 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.550065 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.561565 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.595611 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data" (OuterVolumeSpecName: "config-data") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599012 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599049 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599064 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599086 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599098 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc4a525b-9d53-4958-aa0a-4fb793ff8415-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599111 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599123 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599136 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599147 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtvrw\" (UniqueName: \"kubernetes.io/projected/bc4a525b-9d53-4958-aa0a-4fb793ff8415-kube-api-access-gtvrw\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599159 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj8sw\" (UniqueName: \"kubernetes.io/projected/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-kube-api-access-zj8sw\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599170 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599182 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599194 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.599204 4833 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.628638 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.630205 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.635544 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" (UID: "bd2c88dd-5f9c-412c-bbcc-a3587750e0ea"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.637411 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data" (OuterVolumeSpecName: "config-data") pod "bc4a525b-9d53-4958-aa0a-4fb793ff8415" (UID: "bc4a525b-9d53-4958-aa0a-4fb793ff8415"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.701643 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.701673 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.701698 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:42 crc kubenswrapper[4833]: I0127 14:32:42.701706 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4a525b-9d53-4958-aa0a-4fb793ff8415-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.157934 4833 generic.go:334] "Generic (PLEG): container finished" podID="5a57c811-cef6-458c-bb52-ef9e0861e39a" containerID="51fd440fc6d895911812687450be70452d3f42f42f6ab5fcf444d1029a81fbe7" exitCode=0 Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.158020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86px7" event={"ID":"5a57c811-cef6-458c-bb52-ef9e0861e39a","Type":"ContainerDied","Data":"51fd440fc6d895911812687450be70452d3f42f42f6ab5fcf444d1029a81fbe7"} Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.168352 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc4a525b-9d53-4958-aa0a-4fb793ff8415","Type":"ContainerDied","Data":"0904e88ca45b56c3cf92187d9255bb495faff6ccb2882e669684527b32fe1fde"} Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 
14:32:43.168405 4833 scope.go:117] "RemoveContainer" containerID="d3edf5f1bde9eb4cb440c47c59c35fcc1c015d4009c9ddc31283018d7ee3b35a" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.168575 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.196479 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bd2c88dd-5f9c-412c-bbcc-a3587750e0ea","Type":"ContainerDied","Data":"8416b850635a61e613093133ae2ee6027e326b434584a1deafe100b569a63bfb"} Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.196659 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.203142 4833 generic.go:334] "Generic (PLEG): container finished" podID="3df071dc-eb5b-40dd-85ea-430f44ab198f" containerID="cf0973c23f500358c49a8dadf69c6da79910b7909dc76dc24c87aa2b1df39b81" exitCode=0 Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.203387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wchjj" event={"ID":"3df071dc-eb5b-40dd-85ea-430f44ab198f","Type":"ContainerDied","Data":"cf0973c23f500358c49a8dadf69c6da79910b7909dc76dc24c87aa2b1df39b81"} Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.208092 4833 generic.go:334] "Generic (PLEG): container finished" podID="a324f832-7082-443a-87c7-3cef46ebe7ea" containerID="71017171e2d70f8a3d9129c76f0441d479d83bb8220f8380b3b6cf9f10ec7a33" exitCode=0 Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.209675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6dqbw" event={"ID":"a324f832-7082-443a-87c7-3cef46ebe7ea","Type":"ContainerDied","Data":"71017171e2d70f8a3d9129c76f0441d479d83bb8220f8380b3b6cf9f10ec7a33"} Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 
14:32:43.284176 4833 scope.go:117] "RemoveContainer" containerID="340667f88427c71c4ac2fd229b288b420497a947646db21bdbda7c8715b1f72d" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.304940 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.310785 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.327557 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: E0127 14:32:43.327926 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-log" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.327941 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-log" Jan 27 14:32:43 crc kubenswrapper[4833]: E0127 14:32:43.327962 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.327967 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: E0127 14:32:43.327991 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.327997 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: E0127 14:32:43.328015 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-log" Jan 27 
14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.328020 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-log" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.328174 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.328194 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-httpd" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.328210 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" containerName="glance-log" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.328220 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" containerName="glance-log" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.329104 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.336457 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.341825 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.342041 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.341825 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7fbqk" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.346880 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.364761 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.374016 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.377817 4833 scope.go:117] "RemoveContainer" containerID="157be186c655e14d04abc2641b1969aed463954e7ab0802d5c7ce62aad80e2c9" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.386411 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.392389 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.395486 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.395806 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.396995 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.404977 4833 scope.go:117] "RemoveContainer" containerID="05ec16c057b19c8b72b672a13f455ed4de3412c7e31d16ddaee36d7a6e56f1bb" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.420417 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.420983 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.421277 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc 
kubenswrapper[4833]: I0127 14:32:43.421416 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.421569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.421726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7krm\" (UniqueName: \"kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.421879 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.422726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524288 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclgn\" (UniqueName: \"kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524407 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7krm\" (UniqueName: \"kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.524623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.525882 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.525991 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 
crc kubenswrapper[4833]: I0127 14:32:43.526046 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526139 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 
14:32:43.526244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.526262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.527215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.527681 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.533171 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.533847 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.539541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.546841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.555967 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7krm\" (UniqueName: \"kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.565690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.627884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.627926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bclgn\" (UniqueName: \"kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628006 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628077 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628164 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628218 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.628252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.629609 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.632111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.632256 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.634304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.635128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.635843 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.646815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bclgn\" (UniqueName: \"kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.647420 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.666617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.684575 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:32:43 crc kubenswrapper[4833]: I0127 14:32:43.711852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.237248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cjff8" event={"ID":"32678252-925f-4f5c-9602-5409032b6063","Type":"ContainerStarted","Data":"ee90a0f8f7ea56c5be43e04a44222b8834b06f54acba33a13be9e121129d6afe"} Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.262351 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cjff8" podStartSLOduration=2.872217795 podStartE2EDuration="47.262326175s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="2026-01-27 14:31:59.369856389 +0000 UTC m=+1221.021180791" lastFinishedPulling="2026-01-27 14:32:43.759964769 +0000 UTC m=+1265.411289171" observedRunningTime="2026-01-27 14:32:44.256587436 +0000 UTC m=+1265.907911838" watchObservedRunningTime="2026-01-27 14:32:44.262326175 +0000 UTC m=+1265.913650577" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.355255 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.729456 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.757274 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6dqbw" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.775470 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-wchjj" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.849173 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle\") pod \"a324f832-7082-443a-87c7-3cef46ebe7ea\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.850611 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twnhf\" (UniqueName: \"kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf\") pod \"3df071dc-eb5b-40dd-85ea-430f44ab198f\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data\") pod \"a324f832-7082-443a-87c7-3cef46ebe7ea\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt2mx\" (UniqueName: \"kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858428 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pnbb4\" (UniqueName: \"kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4\") pod \"a324f832-7082-443a-87c7-3cef46ebe7ea\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858486 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs\") pod \"a324f832-7082-443a-87c7-3cef46ebe7ea\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858534 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data\") pod \"3df071dc-eb5b-40dd-85ea-430f44ab198f\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858742 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: 
\"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858775 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data\") pod \"3df071dc-eb5b-40dd-85ea-430f44ab198f\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858809 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts\") pod \"a324f832-7082-443a-87c7-3cef46ebe7ea\" (UID: \"a324f832-7082-443a-87c7-3cef46ebe7ea\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858849 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858874 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle\") pod \"3df071dc-eb5b-40dd-85ea-430f44ab198f\" (UID: \"3df071dc-eb5b-40dd-85ea-430f44ab198f\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.858893 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data\") pod \"5a57c811-cef6-458c-bb52-ef9e0861e39a\" (UID: \"5a57c811-cef6-458c-bb52-ef9e0861e39a\") " Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.859245 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs" 
(OuterVolumeSpecName: "logs") pod "a324f832-7082-443a-87c7-3cef46ebe7ea" (UID: "a324f832-7082-443a-87c7-3cef46ebe7ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.859576 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a324f832-7082-443a-87c7-3cef46ebe7ea-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.862148 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf" (OuterVolumeSpecName: "kube-api-access-twnhf") pod "3df071dc-eb5b-40dd-85ea-430f44ab198f" (UID: "3df071dc-eb5b-40dd-85ea-430f44ab198f"). InnerVolumeSpecName "kube-api-access-twnhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.862796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx" (OuterVolumeSpecName: "kube-api-access-mt2mx") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "kube-api-access-mt2mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.864500 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3df071dc-eb5b-40dd-85ea-430f44ab198f" (UID: "3df071dc-eb5b-40dd-85ea-430f44ab198f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.865796 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4" (OuterVolumeSpecName: "kube-api-access-pnbb4") pod "a324f832-7082-443a-87c7-3cef46ebe7ea" (UID: "a324f832-7082-443a-87c7-3cef46ebe7ea"). InnerVolumeSpecName "kube-api-access-pnbb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.868393 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.877999 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts" (OuterVolumeSpecName: "scripts") pod "a324f832-7082-443a-87c7-3cef46ebe7ea" (UID: "a324f832-7082-443a-87c7-3cef46ebe7ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.882769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.883054 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts" (OuterVolumeSpecName: "scripts") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.902429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.903923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data" (OuterVolumeSpecName: "config-data") pod "5a57c811-cef6-458c-bb52-ef9e0861e39a" (UID: "5a57c811-cef6-458c-bb52-ef9e0861e39a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.916072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data" (OuterVolumeSpecName: "config-data") pod "a324f832-7082-443a-87c7-3cef46ebe7ea" (UID: "a324f832-7082-443a-87c7-3cef46ebe7ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.917523 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a324f832-7082-443a-87c7-3cef46ebe7ea" (UID: "a324f832-7082-443a-87c7-3cef46ebe7ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.920786 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3df071dc-eb5b-40dd-85ea-430f44ab198f" (UID: "3df071dc-eb5b-40dd-85ea-430f44ab198f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.965177 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data" (OuterVolumeSpecName: "config-data") pod "3df071dc-eb5b-40dd-85ea-430f44ab198f" (UID: "3df071dc-eb5b-40dd-85ea-430f44ab198f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.966877 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt2mx\" (UniqueName: \"kubernetes.io/projected/5a57c811-cef6-458c-bb52-ef9e0861e39a-kube-api-access-mt2mx\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968170 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnbb4\" (UniqueName: \"kubernetes.io/projected/a324f832-7082-443a-87c7-3cef46ebe7ea-kube-api-access-pnbb4\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968256 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968314 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968367 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968417 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968485 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968556 
4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968625 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968690 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df071dc-eb5b-40dd-85ea-430f44ab198f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968751 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a57c811-cef6-458c-bb52-ef9e0861e39a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968812 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968862 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twnhf\" (UniqueName: \"kubernetes.io/projected/3df071dc-eb5b-40dd-85ea-430f44ab198f-kube-api-access-twnhf\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.968910 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a324f832-7082-443a-87c7-3cef46ebe7ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:44 crc kubenswrapper[4833]: I0127 14:32:44.975343 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:32:45 crc 
kubenswrapper[4833]: I0127 14:32:45.247945 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4a525b-9d53-4958-aa0a-4fb793ff8415" path="/var/lib/kubelet/pods/bc4a525b-9d53-4958-aa0a-4fb793ff8415/volumes" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.249030 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2c88dd-5f9c-412c-bbcc-a3587750e0ea" path="/var/lib/kubelet/pods/bd2c88dd-5f9c-412c-bbcc-a3587750e0ea/volumes" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.288515 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-68475b756f-rzm75"] Jan 27 14:32:45 crc kubenswrapper[4833]: E0127 14:32:45.288917 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" containerName="placement-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.288934 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" containerName="placement-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: E0127 14:32:45.288960 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" containerName="watcher-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.288967 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" containerName="watcher-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: E0127 14:32:45.288980 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a57c811-cef6-458c-bb52-ef9e0861e39a" containerName="keystone-bootstrap" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.288987 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a57c811-cef6-458c-bb52-ef9e0861e39a" containerName="keystone-bootstrap" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.289149 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5a57c811-cef6-458c-bb52-ef9e0861e39a" containerName="keystone-bootstrap" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.289178 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" containerName="watcher-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.289188 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" containerName="placement-db-sync" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.289783 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.300961 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.301187 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.336501 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68475b756f-rzm75"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.360186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerStarted","Data":"ece419e3fa97f5b7a5a8c286d0f8a968801ef13946a830982216463a38a7e43a"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.369826 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wchjj" event={"ID":"3df071dc-eb5b-40dd-85ea-430f44ab198f","Type":"ContainerDied","Data":"f26ab8c81dd269e293670db877255e28bbfdd9cc7897be3d8946215669a43bcb"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.369872 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f26ab8c81dd269e293670db877255e28bbfdd9cc7897be3d8946215669a43bcb" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.369951 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-wchjj" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.395794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-fernet-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.395859 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-config-data\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.395898 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-internal-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.395929 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-scripts\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.395968 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ndfvr\" (UniqueName: \"kubernetes.io/projected/fef33f59-fb9d-49e0-b9fb-70636656f7c7-kube-api-access-ndfvr\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.396006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-public-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.396059 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-combined-ca-bundle\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.396117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-credential-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.396231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerStarted","Data":"5fb719ed1f26e8b15b731bd9effac6db8d4bb0c7a686eb5e9afaad1f272286e5"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.396294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerStarted","Data":"7fe6c40ebcc01e5254c1042555b14a2f960cad9846877f482ccd0223f4b004e5"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.399403 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6dqbw" event={"ID":"a324f832-7082-443a-87c7-3cef46ebe7ea","Type":"ContainerDied","Data":"12c15c044b482b0bc12c89a70a6a62c9a1a5e758c2fd02eb439e78f257f545a4"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.399468 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c15c044b482b0bc12c89a70a6a62c9a1a5e758c2fd02eb439e78f257f545a4" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.399541 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6dqbw" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.427707 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-86px7" event={"ID":"5a57c811-cef6-458c-bb52-ef9e0861e39a","Type":"ContainerDied","Data":"52aee9b58558230c125b067b2e856b8f4aa4a4282fd66d32b8d84f70e964d41b"} Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.427749 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52aee9b58558230c125b067b2e856b8f4aa4a4282fd66d32b8d84f70e964d41b" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.427839 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-86px7" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.435302 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cd7556dcb-c7h4r"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.438683 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.449061 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.449369 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6nldk" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.449800 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.449957 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.450107 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.503498 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd7556dcb-c7h4r"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-fernet-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-config-data\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-internal-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-scripts\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504510 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfvr\" (UniqueName: \"kubernetes.io/projected/fef33f59-fb9d-49e0-b9fb-70636656f7c7-kube-api-access-ndfvr\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504537 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-public-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-combined-ca-bundle\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.504611 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-credential-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.514091 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-scripts\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.514522 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-internal-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.518102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-config-data\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.521786 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-public-tls-certs\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.528064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-combined-ca-bundle\") pod \"keystone-68475b756f-rzm75\" (UID: 
\"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.531810 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-credential-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.542838 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fef33f59-fb9d-49e0-b9fb-70636656f7c7-fernet-keys\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.560672 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.561903 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.574368 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfvr\" (UniqueName: \"kubernetes.io/projected/fef33f59-fb9d-49e0-b9fb-70636656f7c7-kube-api-access-ndfvr\") pod \"keystone-68475b756f-rzm75\" (UID: \"fef33f59-fb9d-49e0-b9fb-70636656f7c7\") " pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.584197 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.584458 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-ldtld" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-combined-ca-bundle\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605713 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-internal-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-scripts\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " 
pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605826 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbp6\" (UniqueName: \"kubernetes.io/projected/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-kube-api-access-sqbp6\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-config-data\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605879 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-public-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.605908 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-logs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.608369 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.687107 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 
14:32:45.691016 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.700530 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.700758 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.712504 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.714038 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.714973 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-scripts\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715059 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbp6\" (UniqueName: \"kubernetes.io/projected/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-kube-api-access-sqbp6\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-config-data\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715177 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-public-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715198 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h65n\" (UniqueName: \"kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-logs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715252 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-combined-ca-bundle\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.715266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-internal-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.721221 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.722108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-logs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc 
kubenswrapper[4833]: I0127 14:32:45.738704 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-combined-ca-bundle\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.738948 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.741290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-public-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.743972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-internal-tls-certs\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.757793 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbp6\" (UniqueName: \"kubernetes.io/projected/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-kube-api-access-sqbp6\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.757934 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-scripts\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " 
pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.765123 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2a96c2-908d-48f8-97c7-bc4a59f1caff-config-data\") pod \"placement-cd7556dcb-c7h4r\" (UID: \"cc2a96c2-908d-48f8-97c7-bc4a59f1caff\") " pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.807527 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.807740 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cfd687f8-694fk" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-api" containerID="cri-o://e78a3768176c6dddaef8c1235fae1d9dc998347e7ed3fed4a3fe7516863211dd" gracePeriod=30 Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.808138 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cfd687f8-694fk" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-httpd" containerID="cri-o://e64d014885d6d0eed2ac7ee73fbdd5bc168b0b6eee3e8c66aa48696ba90b9572" gracePeriod=30 Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.819417 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.829950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h65n\" (UniqueName: \"kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n\") pod \"watcher-decision-engine-0\" (UID: 
\"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.830168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.830306 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm78p\" (UniqueName: \"kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-config-data\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835468 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfld\" (UniqueName: \"kubernetes.io/projected/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-kube-api-access-ljfld\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " 
pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.835911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.836032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.836120 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-logs\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.836210 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.838478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.845079 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.845558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.852012 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.855695 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h65n\" (UniqueName: \"kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n\") pod \"watcher-decision-engine-0\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.907622 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bdcd59f97-6hlst"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.910211 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.921871 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdcd59f97-6hlst"] Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.938239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.938532 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.938877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wm78p\" (UniqueName: \"kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.938920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-config-data\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.938951 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfld\" (UniqueName: \"kubernetes.io/projected/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-kube-api-access-ljfld\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.939050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.939098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-logs\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.939120 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " 
pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.939155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.940802 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-logs\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.943635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.952212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.952572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.952577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.955914 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-config-data\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.957236 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.959422 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm78p\" (UniqueName: \"kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p\") pod \"watcher-api-0\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " pod="openstack/watcher-api-0" Jan 27 14:32:45 crc kubenswrapper[4833]: I0127 14:32:45.966407 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfld\" (UniqueName: \"kubernetes.io/projected/5265f8e8-b9f5-471d-876e-3d8ebe3ec895-kube-api-access-ljfld\") pod \"watcher-applier-0\" (UID: \"5265f8e8-b9f5-471d-876e-3d8ebe3ec895\") " pod="openstack/watcher-applier-0" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.051539 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-httpd-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " 
pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.051750 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-ovndb-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.051982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nwf6\" (UniqueName: \"kubernetes.io/projected/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-kube-api-access-8nwf6\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.052061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-combined-ca-bundle\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.052220 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-public-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.052272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " 
pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.052353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-internal-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.157474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nwf6\" (UniqueName: \"kubernetes.io/projected/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-kube-api-access-8nwf6\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.157599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-combined-ca-bundle\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.158268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-public-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.158335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc 
kubenswrapper[4833]: I0127 14:32:46.158411 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-internal-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.158745 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-httpd-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.159515 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-ovndb-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.162422 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-httpd-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.163038 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-combined-ca-bundle\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.165072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-config\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.165634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-ovndb-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.166642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-public-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.166775 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-internal-tls-certs\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.176394 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nwf6\" (UniqueName: \"kubernetes.io/projected/0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5-kube-api-access-8nwf6\") pod \"neutron-6bdcd59f97-6hlst\" (UID: \"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5\") " pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.190100 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.265805 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cd9489696-52kzm" podUID="534c5b75-240a-4ded-bb13-f05eb3158527" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.426093 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.432394 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.442743 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerID="e64d014885d6d0eed2ac7ee73fbdd5bc168b0b6eee3e8c66aa48696ba90b9572" exitCode=0 Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.442770 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerID="e78a3768176c6dddaef8c1235fae1d9dc998347e7ed3fed4a3fe7516863211dd" exitCode=0 Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.442788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cfd687f8-694fk" event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerDied","Data":"e64d014885d6d0eed2ac7ee73fbdd5bc168b0b6eee3e8c66aa48696ba90b9572"} Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.442811 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cfd687f8-694fk" 
event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerDied","Data":"e78a3768176c6dddaef8c1235fae1d9dc998347e7ed3fed4a3fe7516863211dd"} Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.446682 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.481102 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.493257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:32:46 crc kubenswrapper[4833]: I0127 14:32:46.504402 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 14:32:47 crc kubenswrapper[4833]: I0127 14:32:47.464735 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ht8p9" event={"ID":"a93af5fb-6812-4f30-9e89-e8c58b01a69e","Type":"ContainerStarted","Data":"52fe7c1211d96fe83b1cb5a61cdf3c7f30d0eca60aa629b378e35821a1184556"} Jan 27 14:32:47 crc kubenswrapper[4833]: I0127 14:32:47.466819 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerStarted","Data":"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3"} Jan 27 14:32:47 crc kubenswrapper[4833]: I0127 14:32:47.469894 4833 generic.go:334] "Generic (PLEG): container finished" podID="32678252-925f-4f5c-9602-5409032b6063" containerID="ee90a0f8f7ea56c5be43e04a44222b8834b06f54acba33a13be9e121129d6afe" exitCode=0 Jan 27 14:32:47 crc kubenswrapper[4833]: I0127 14:32:47.469953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cjff8" 
event={"ID":"32678252-925f-4f5c-9602-5409032b6063","Type":"ContainerDied","Data":"ee90a0f8f7ea56c5be43e04a44222b8834b06f54acba33a13be9e121129d6afe"} Jan 27 14:32:47 crc kubenswrapper[4833]: I0127 14:32:47.488184 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ht8p9" podStartSLOduration=4.460174872 podStartE2EDuration="50.488164653s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="2026-01-27 14:31:58.714533724 +0000 UTC m=+1220.365858126" lastFinishedPulling="2026-01-27 14:32:44.742523505 +0000 UTC m=+1266.393847907" observedRunningTime="2026-01-27 14:32:47.481701829 +0000 UTC m=+1269.133026231" watchObservedRunningTime="2026-01-27 14:32:47.488164653 +0000 UTC m=+1269.139489055" Jan 27 14:32:48 crc kubenswrapper[4833]: I0127 14:32:48.717677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" Jan 27 14:32:48 crc kubenswrapper[4833]: I0127 14:32:48.788652 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:32:48 crc kubenswrapper[4833]: I0127 14:32:48.789013 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="dnsmasq-dns" containerID="cri-o://c19944e21f04bb148433f38c6652ada961744c5791d797dee80faccce8c4c7dc" gracePeriod=10 Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.204550 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.345775 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config\") pod \"5c40c21a-8ada-44db-9800-11aa5e084e66\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.345875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvw4x\" (UniqueName: \"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x\") pod \"5c40c21a-8ada-44db-9800-11aa5e084e66\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.345907 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs\") pod \"5c40c21a-8ada-44db-9800-11aa5e084e66\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.345975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config\") pod \"5c40c21a-8ada-44db-9800-11aa5e084e66\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.346045 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle\") pod \"5c40c21a-8ada-44db-9800-11aa5e084e66\" (UID: \"5c40c21a-8ada-44db-9800-11aa5e084e66\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.380612 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x" (OuterVolumeSpecName: "kube-api-access-zvw4x") pod "5c40c21a-8ada-44db-9800-11aa5e084e66" (UID: "5c40c21a-8ada-44db-9800-11aa5e084e66"). InnerVolumeSpecName "kube-api-access-zvw4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.394254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5c40c21a-8ada-44db-9800-11aa5e084e66" (UID: "5c40c21a-8ada-44db-9800-11aa5e084e66"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.450936 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.450964 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvw4x\" (UniqueName: \"kubernetes.io/projected/5c40c21a-8ada-44db-9800-11aa5e084e66-kube-api-access-zvw4x\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.491550 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config" (OuterVolumeSpecName: "config") pod "5c40c21a-8ada-44db-9800-11aa5e084e66" (UID: "5c40c21a-8ada-44db-9800-11aa5e084e66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.522311 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cfd687f8-694fk" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.544516 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5c40c21a-8ada-44db-9800-11aa5e084e66" (UID: "5c40c21a-8ada-44db-9800-11aa5e084e66"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.547723 4833 generic.go:334] "Generic (PLEG): container finished" podID="10bb060d-4709-441d-859b-65bf70812174" containerID="c19944e21f04bb148433f38c6652ada961744c5791d797dee80faccce8c4c7dc" exitCode=0 Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.555527 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.555555 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.613838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cfd687f8-694fk" event={"ID":"5c40c21a-8ada-44db-9800-11aa5e084e66","Type":"ContainerDied","Data":"267556364bb755eee88dafca08bb21cfc414c5ac1615bfa33bb85191b0cf586a"} Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.613872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cjff8" event={"ID":"32678252-925f-4f5c-9602-5409032b6063","Type":"ContainerDied","Data":"a72ac9feae6c29dfb12671b2399b0d8173d85d4dd1d97318a997e2de637052de"} Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.613890 4833 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="a72ac9feae6c29dfb12671b2399b0d8173d85d4dd1d97318a997e2de637052de" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.613900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" event={"ID":"10bb060d-4709-441d-859b-65bf70812174","Type":"ContainerDied","Data":"c19944e21f04bb148433f38c6652ada961744c5791d797dee80faccce8c4c7dc"} Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.613925 4833 scope.go:117] "RemoveContainer" containerID="e64d014885d6d0eed2ac7ee73fbdd5bc168b0b6eee3e8c66aa48696ba90b9572" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.615037 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c40c21a-8ada-44db-9800-11aa5e084e66" (UID: "5c40c21a-8ada-44db-9800-11aa5e084e66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.636685 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cjff8" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.662038 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c40c21a-8ada-44db-9800-11aa5e084e66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.699068 4833 scope.go:117] "RemoveContainer" containerID="e78a3768176c6dddaef8c1235fae1d9dc998347e7ed3fed4a3fe7516863211dd" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.762815 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbgg\" (UniqueName: \"kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg\") pod \"32678252-925f-4f5c-9602-5409032b6063\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.762907 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle\") pod \"32678252-925f-4f5c-9602-5409032b6063\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.763059 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data\") pod \"32678252-925f-4f5c-9602-5409032b6063\" (UID: \"32678252-925f-4f5c-9602-5409032b6063\") " Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.766786 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-68475b756f-rzm75"] Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.778009 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg" 
(OuterVolumeSpecName: "kube-api-access-4rbgg") pod "32678252-925f-4f5c-9602-5409032b6063" (UID: "32678252-925f-4f5c-9602-5409032b6063"). InnerVolumeSpecName "kube-api-access-4rbgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.792403 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "32678252-925f-4f5c-9602-5409032b6063" (UID: "32678252-925f-4f5c-9602-5409032b6063"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.852767 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32678252-925f-4f5c-9602-5409032b6063" (UID: "32678252-925f-4f5c-9602-5409032b6063"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.867145 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbgg\" (UniqueName: \"kubernetes.io/projected/32678252-925f-4f5c-9602-5409032b6063-kube-api-access-4rbgg\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.867185 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.867199 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/32678252-925f-4f5c-9602-5409032b6063-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.869935 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bdcd59f97-6hlst"] Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.939296 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.958500 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cfd687f8-694fk"] Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.971056 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cd7556dcb-c7h4r"] Jan 27 14:32:49 crc kubenswrapper[4833]: I0127 14:32:49.992904 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 14:32:50 crc kubenswrapper[4833]: W0127 14:32:50.021802 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e84c4a_5a0c_473c_bc1f_ed52ac0994bb.slice/crio-2d9e6dfa820afc4d5196f5a6830590ed86fb5a2bc72e17312a32fc69a92e38eb WatchSource:0}: 
Error finding container 2d9e6dfa820afc4d5196f5a6830590ed86fb5a2bc72e17312a32fc69a92e38eb: Status 404 returned error can't find the container with id 2d9e6dfa820afc4d5196f5a6830590ed86fb5a2bc72e17312a32fc69a92e38eb Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.030502 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.060938 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.343709 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388330 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388365 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388427 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.388530 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxdjq\" (UniqueName: \"kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq\") pod \"10bb060d-4709-441d-859b-65bf70812174\" (UID: \"10bb060d-4709-441d-859b-65bf70812174\") " Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.438054 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq" (OuterVolumeSpecName: "kube-api-access-dxdjq") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). InnerVolumeSpecName "kube-api-access-dxdjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.497182 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxdjq\" (UniqueName: \"kubernetes.io/projected/10bb060d-4709-441d-859b-65bf70812174-kube-api-access-dxdjq\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.581244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.591464 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config" (OuterVolumeSpecName: "config") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.598805 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.598855 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.598922 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.610963 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.612904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "10bb060d-4709-441d-859b-65bf70812174" (UID: "10bb060d-4709-441d-859b-65bf70812174"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.664405 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdcd59f97-6hlst" event={"ID":"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5","Type":"ContainerStarted","Data":"d2dc558f99d3fa11b62b35c25d18f478e8cb1f117424fc28dd6dfd1186c7bc1e"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.664473 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdcd59f97-6hlst" event={"ID":"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5","Type":"ContainerStarted","Data":"0b2ecb5fc77eb7d249c334310e745c37b54d932012fcfe231bd8c071124bf2ca"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.682132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb","Type":"ContainerStarted","Data":"2d9e6dfa820afc4d5196f5a6830590ed86fb5a2bc72e17312a32fc69a92e38eb"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.688865 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5265f8e8-b9f5-471d-876e-3d8ebe3ec895","Type":"ContainerStarted","Data":"5ec5efc6c96901cb2ee453c2ae9542cad85624a0bb069f02f065881729c9f0d0"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.700279 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.700305 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.700314 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10bb060d-4709-441d-859b-65bf70812174-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.722696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerStarted","Data":"b862939c604b2b62cbc437e7a9e228f10942d870fe6751c9dc9519d134664b8e"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.731551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerStarted","Data":"3a9fdb203e2d259b5c986b5aab84427317433c3a5996445f1eaabb1b237fb87d"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.764741 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.764721025 podStartE2EDuration="7.764721025s" podCreationTimestamp="2026-01-27 14:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:50.755806826 +0000 UTC m=+1272.407131248" watchObservedRunningTime="2026-01-27 14:32:50.764721025 +0000 UTC m=+1272.416045427" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.781837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerStarted","Data":"d09306b948a3dea406b170ebf4bf87308a06cf665822c68ba136f6fd7ba86402"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.783804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd7556dcb-c7h4r" event={"ID":"cc2a96c2-908d-48f8-97c7-bc4a59f1caff","Type":"ContainerStarted","Data":"2447bcbcf8600ba9d73befc479e62003cdc92a3e49b12f7d45fe701870547694"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.853909 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68475b756f-rzm75" event={"ID":"fef33f59-fb9d-49e0-b9fb-70636656f7c7","Type":"ContainerStarted","Data":"b6a1802f78b1d2861dbf845d9e576ca7d3aed63452008eff5fd76d854bd26415"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.853948 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-68475b756f-rzm75" event={"ID":"fef33f59-fb9d-49e0-b9fb-70636656f7c7","Type":"ContainerStarted","Data":"1c8a8f858b426230d3e0838e06dc18ead98e2ab9c772eb59b85b27e9a4bf67ac"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.854186 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-68475b756f-rzm75" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.893771 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerStarted","Data":"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.929346 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-694b98c787-zwc44"] Jan 27 14:32:50 crc kubenswrapper[4833]: E0127 14:32:50.929960 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="init" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.929985 
4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="init" Jan 27 14:32:50 crc kubenswrapper[4833]: E0127 14:32:50.930014 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="dnsmasq-dns" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930022 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="dnsmasq-dns" Jan 27 14:32:50 crc kubenswrapper[4833]: E0127 14:32:50.930035 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-httpd" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930044 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-httpd" Jan 27 14:32:50 crc kubenswrapper[4833]: E0127 14:32:50.930058 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32678252-925f-4f5c-9602-5409032b6063" containerName="barbican-db-sync" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930066 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="32678252-925f-4f5c-9602-5409032b6063" containerName="barbican-db-sync" Jan 27 14:32:50 crc kubenswrapper[4833]: E0127 14:32:50.930087 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-api" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930094 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-api" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930322 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-httpd" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930348 4833 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="32678252-925f-4f5c-9602-5409032b6063" containerName="barbican-db-sync" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930361 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" containerName="neutron-api" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.930381 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="10bb060d-4709-441d-859b-65bf70812174" containerName="dnsmasq-dns" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.936696 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.946244 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.946837 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cjff8" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.947218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" event={"ID":"10bb060d-4709-441d-859b-65bf70812174","Type":"ContainerDied","Data":"7a881e9e3a7bba67db442acb977c2c117f151409087056e1aada6699b883c8a9"} Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.947260 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-vsdp5" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.947266 4833 scope.go:117] "RemoveContainer" containerID="c19944e21f04bb148433f38c6652ada961744c5791d797dee80faccce8c4c7dc" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.988626 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86457ddd7d-rqrvt"] Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.990221 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:50 crc kubenswrapper[4833]: I0127 14:32:50.992903 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.019698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.019816 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data-custom\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.019856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-combined-ca-bundle\") pod \"barbican-worker-694b98c787-zwc44\" (UID: 
\"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.019892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6666db7-ad38-4773-bc1f-35667d8ea76b-logs\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.019913 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx94m\" (UniqueName: \"kubernetes.io/projected/e6666db7-ad38-4773-bc1f-35667d8ea76b-kube-api-access-jx94m\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.033027 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-68475b756f-rzm75" podStartSLOduration=6.033007396 podStartE2EDuration="6.033007396s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:50.926724779 +0000 UTC m=+1272.578049181" watchObservedRunningTime="2026-01-27 14:32:51.033007396 +0000 UTC m=+1272.684331798" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.072931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-694b98c787-zwc44"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.087358 4833 scope.go:117] "RemoveContainer" containerID="00f617fca33466c92a5979ead869e755bb22c1d5c358e90e133531bce2ecbf47" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.088782 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-86457ddd7d-rqrvt"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.100566 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.100547446 podStartE2EDuration="8.100547446s" podCreationTimestamp="2026-01-27 14:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:50.956615067 +0000 UTC m=+1272.607939489" watchObservedRunningTime="2026-01-27 14:32:51.100547446 +0000 UTC m=+1272.751871848" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-combined-ca-bundle\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-combined-ca-bundle\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data-custom\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123279 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6666db7-ad38-4773-bc1f-35667d8ea76b-logs\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx94m\" (UniqueName: \"kubernetes.io/projected/e6666db7-ad38-4773-bc1f-35667d8ea76b-kube-api-access-jx94m\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-logs\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123493 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data-custom\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123535 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.123553 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbk6f\" (UniqueName: \"kubernetes.io/projected/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-kube-api-access-hbk6f\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.124918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6666db7-ad38-4773-bc1f-35667d8ea76b-logs\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.129130 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.130739 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.131588 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data-custom\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.131966 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-config-data\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.137959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6666db7-ad38-4773-bc1f-35667d8ea76b-combined-ca-bundle\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.150969 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.166338 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx94m\" (UniqueName: \"kubernetes.io/projected/e6666db7-ad38-4773-bc1f-35667d8ea76b-kube-api-access-jx94m\") pod \"barbican-worker-694b98c787-zwc44\" (UID: \"e6666db7-ad38-4773-bc1f-35667d8ea76b\") " pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.222687 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c40c21a-8ada-44db-9800-11aa5e084e66" 
path="/var/lib/kubelet/pods/5c40c21a-8ada-44db-9800-11aa5e084e66/volumes" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.223265 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225144 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225181 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbk6f\" (UniqueName: \"kubernetes.io/projected/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-kube-api-access-hbk6f\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-combined-ca-bundle\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225255 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data-custom\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225285 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225369 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 
14:32:51.225408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-logs\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.225470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqt4\" (UniqueName: \"kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.229495 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.230273 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-logs\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.234090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-config-data-custom\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 
crc kubenswrapper[4833]: I0127 14:32:51.241464 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-vsdp5"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.251385 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbk6f\" (UniqueName: \"kubernetes.io/projected/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-kube-api-access-hbk6f\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.253611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7786a9e5-715f-4bd8-9eb9-4bd199e72b92-combined-ca-bundle\") pod \"barbican-keystone-listener-86457ddd7d-rqrvt\" (UID: \"7786a9e5-715f-4bd8-9eb9-4bd199e72b92\") " pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.287971 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.335138 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.337664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.339736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.339801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.339845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.339891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.339917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.340011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqt4\" (UniqueName: \"kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.340864 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.340960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.341578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.341678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config\") pod 
\"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.342280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.367133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqt4\" (UniqueName: \"kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4\") pod \"dnsmasq-dns-688c87cc99-c66rd\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.375660 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-694b98c787-zwc44" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.381096 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.413674 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.442469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qtzj\" (UniqueName: \"kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.442559 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.442762 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.442822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.442979 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.470957 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.545170 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.545429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.545532 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.545564 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qtzj\" (UniqueName: \"kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " 
pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.545582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.547959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.556128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.557076 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.575134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qtzj\" (UniqueName: \"kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc 
kubenswrapper[4833]: I0127 14:32:51.591261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data\") pod \"barbican-api-6c8b977f46-tpb5g\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") " pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:51 crc kubenswrapper[4833]: I0127 14:32:51.636940 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.037021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerStarted","Data":"b1a8396f658088cab73bfc6bf953fb7bb567049c58d162bb1b6534c0ab3a5b37"} Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.053697 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bdcd59f97-6hlst" event={"ID":"0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5","Type":"ContainerStarted","Data":"a5188e13276b2abbf488e65c7ca03ee295b2e6cc16e7241daec342611922e666"} Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.054590 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bdcd59f97-6hlst" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.096653 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd7556dcb-c7h4r" event={"ID":"cc2a96c2-908d-48f8-97c7-bc4a59f1caff","Type":"ContainerStarted","Data":"95fd6cd3ea5564231f3821fba8e55b0eceb79923cdf5285dfd49388115255bbe"} Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.096687 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cd7556dcb-c7h4r" event={"ID":"cc2a96c2-908d-48f8-97c7-bc4a59f1caff","Type":"ContainerStarted","Data":"45ae045773d0e5da1bdc01a5e3435e54f1da5cf7eb3b3934e2f066a90cc62e2f"} Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 
14:32:52.096703 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.096713 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cd7556dcb-c7h4r" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.132625 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cd7556dcb-c7h4r" podStartSLOduration=7.132600709 podStartE2EDuration="7.132600709s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:52.129253104 +0000 UTC m=+1273.780577516" watchObservedRunningTime="2026-01-27 14:32:52.132600709 +0000 UTC m=+1273.783925111" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.137486 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bdcd59f97-6hlst" podStartSLOduration=7.1374338569999995 podStartE2EDuration="7.137433857s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:52.088718078 +0000 UTC m=+1273.740042480" watchObservedRunningTime="2026-01-27 14:32:52.137433857 +0000 UTC m=+1273.788758259" Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.477509 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86457ddd7d-rqrvt"] Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.489845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-694b98c787-zwc44"] Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 14:32:52.564838 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:32:52 crc kubenswrapper[4833]: I0127 
14:32:52.761645 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:32:53 crc kubenswrapper[4833]: W0127 14:32:53.082436 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7786a9e5_715f_4bd8_9eb9_4bd199e72b92.slice/crio-eed999f8ffee0797d5770bc2f9985a7fa58a54e77d8816b1c7f7fd3522c78aff WatchSource:0}: Error finding container eed999f8ffee0797d5770bc2f9985a7fa58a54e77d8816b1c7f7fd3522c78aff: Status 404 returned error can't find the container with id eed999f8ffee0797d5770bc2f9985a7fa58a54e77d8816b1c7f7fd3522c78aff Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.149421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerStarted","Data":"ec6c682cdca46ca81fcaf15731717ba8c6653c36692aa4f6d664417a07efca94"} Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.150754 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.159630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-694b98c787-zwc44" event={"ID":"e6666db7-ad38-4773-bc1f-35667d8ea76b","Type":"ContainerStarted","Data":"cebe15536acda2294824ce440d9afd20d5d85f59d56fd407abb91a5f0f36f38c"} Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.165838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" event={"ID":"7786a9e5-715f-4bd8-9eb9-4bd199e72b92","Type":"ContainerStarted","Data":"eed999f8ffee0797d5770bc2f9985a7fa58a54e77d8816b1c7f7fd3522c78aff"} Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.167456 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" 
event={"ID":"c5ee6173-1d4d-47da-b5cb-e3d711df9826","Type":"ContainerStarted","Data":"64fe49a94e96163574e70f6190b700056bad4f63c30885f0e74a681fc0f4812b"} Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.169306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerStarted","Data":"209a5302094ca57e72f96b61dc124ffbc11f9cd75f3b15e26cbb9c10bc6675f5"} Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.179128 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=8.179112845 podStartE2EDuration="8.179112845s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:53.174965542 +0000 UTC m=+1274.826289944" watchObservedRunningTime="2026-01-27 14:32:53.179112845 +0000 UTC m=+1274.830437247" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.233557 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10bb060d-4709-441d-859b-65bf70812174" path="/var/lib/kubelet/pods/10bb060d-4709-441d-859b-65bf70812174/volumes" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.685060 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.685102 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.712931 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.712989 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.733707 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.776720 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.788955 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:53 crc kubenswrapper[4833]: I0127 14:32:53.799801 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.185876 4833 generic.go:334] "Generic (PLEG): container finished" podID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" containerID="52fe7c1211d96fe83b1cb5a61cdf3c7f30d0eca60aa629b378e35821a1184556" exitCode=0 Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.185990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ht8p9" event={"ID":"a93af5fb-6812-4f30-9e89-e8c58b01a69e","Type":"ContainerDied","Data":"52fe7c1211d96fe83b1cb5a61cdf3c7f30d0eca60aa629b378e35821a1184556"} Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.188005 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.188024 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.188036 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.188048 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.647650 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-689fdb796b-5m2hw"] Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.649393 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.652840 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.654936 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.661410 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-689fdb796b-5m2hw"] Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-internal-tls-certs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data-custom\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732300 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-public-tls-certs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732326 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx268\" (UniqueName: \"kubernetes.io/projected/8f7651db-4775-4828-a4ff-923c39645dd0-kube-api-access-kx268\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732352 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-combined-ca-bundle\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.732483 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f7651db-4775-4828-a4ff-923c39645dd0-logs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834108 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/8f7651db-4775-4828-a4ff-923c39645dd0-logs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-internal-tls-certs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834232 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data-custom\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-public-tls-certs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx268\" (UniqueName: \"kubernetes.io/projected/8f7651db-4775-4828-a4ff-923c39645dd0-kube-api-access-kx268\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834314 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.834385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-combined-ca-bundle\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.838946 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-internal-tls-certs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.839191 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f7651db-4775-4828-a4ff-923c39645dd0-logs\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.843190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-combined-ca-bundle\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.846979 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-public-tls-certs\") pod 
\"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.851615 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.852055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f7651db-4775-4828-a4ff-923c39645dd0-config-data-custom\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.855982 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx268\" (UniqueName: \"kubernetes.io/projected/8f7651db-4775-4828-a4ff-923c39645dd0-kube-api-access-kx268\") pod \"barbican-api-689fdb796b-5m2hw\" (UID: \"8f7651db-4775-4828-a4ff-923c39645dd0\") " pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:54 crc kubenswrapper[4833]: I0127 14:32:54.979818 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.252993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"5265f8e8-b9f5-471d-876e-3d8ebe3ec895","Type":"ContainerStarted","Data":"e1106b3e6547632ec3a360c29e188aa0d169fec45173bcd7e2aa10709d7616d3"} Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.267310 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerStarted","Data":"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056"} Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.267621 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerStarted","Data":"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523"} Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.268719 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.268749 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.273930 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.334805136 podStartE2EDuration="10.273912507s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="2026-01-27 14:32:50.076143165 +0000 UTC m=+1271.727467577" lastFinishedPulling="2026-01-27 14:32:54.015250546 +0000 UTC m=+1275.666574948" observedRunningTime="2026-01-27 14:32:55.261347026 +0000 UTC m=+1276.912671428" watchObservedRunningTime="2026-01-27 14:32:55.273912507 +0000 UTC m=+1276.925236909" Jan 27 14:32:55 crc 
kubenswrapper[4833]: I0127 14:32:55.309200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb","Type":"ContainerStarted","Data":"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571"} Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.328858 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c8b977f46-tpb5g" podStartSLOduration=4.328834145 podStartE2EDuration="4.328834145s" podCreationTimestamp="2026-01-27 14:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:55.291833718 +0000 UTC m=+1276.943158120" watchObservedRunningTime="2026-01-27 14:32:55.328834145 +0000 UTC m=+1276.980158547" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.329974 4833 generic.go:334] "Generic (PLEG): container finished" podID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerID="fede047154900321fd4690719fe4d8480a7448a91d0ec0acb85d113666f26a18" exitCode=0 Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.330890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" event={"ID":"c5ee6173-1d4d-47da-b5cb-e3d711df9826","Type":"ContainerDied","Data":"fede047154900321fd4690719fe4d8480a7448a91d0ec0acb85d113666f26a18"} Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.331836 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.373655 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.426562128 podStartE2EDuration="10.373632557s" podCreationTimestamp="2026-01-27 14:32:45 +0000 UTC" firstStartedPulling="2026-01-27 14:32:50.076555624 +0000 UTC m=+1271.727880026" lastFinishedPulling="2026-01-27 
14:32:54.023626043 +0000 UTC m=+1275.674950455" observedRunningTime="2026-01-27 14:32:55.338121513 +0000 UTC m=+1276.989445915" watchObservedRunningTime="2026-01-27 14:32:55.373632557 +0000 UTC m=+1277.024956959" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.633845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-689fdb796b-5m2hw"] Jan 27 14:32:55 crc kubenswrapper[4833]: W0127 14:32:55.658987 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f7651db_4775_4828_a4ff_923c39645dd0.slice/crio-47d728e4dc1ab4c8f753373bc0d57c120e9eac3963c83bfa0b336eac965b75b8 WatchSource:0}: Error finding container 47d728e4dc1ab4c8f753373bc0d57c120e9eac3963c83bfa0b336eac965b75b8: Status 404 returned error can't find the container with id 47d728e4dc1ab4c8f753373bc0d57c120e9eac3963c83bfa0b336eac965b75b8 Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.773284 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866143 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866171 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866193 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdhpw\" (UniqueName: \"kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866397 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.866419 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data\") pod \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\" (UID: \"a93af5fb-6812-4f30-9e89-e8c58b01a69e\") " Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.867087 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93af5fb-6812-4f30-9e89-e8c58b01a69e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.874165 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") 
pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.881639 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw" (OuterVolumeSpecName: "kube-api-access-pdhpw") pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "kube-api-access-pdhpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.881746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts" (OuterVolumeSpecName: "scripts") pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.920659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.949041 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data" (OuterVolumeSpecName: "config-data") pod "a93af5fb-6812-4f30-9e89-e8c58b01a69e" (UID: "a93af5fb-6812-4f30-9e89-e8c58b01a69e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.968124 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.968149 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.968158 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.968168 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdhpw\" (UniqueName: \"kubernetes.io/projected/a93af5fb-6812-4f30-9e89-e8c58b01a69e-kube-api-access-pdhpw\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:55 crc kubenswrapper[4833]: I0127 14:32:55.968178 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a93af5fb-6812-4f30-9e89-e8c58b01a69e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.188435 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.267141 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cd9489696-52kzm" podUID="534c5b75-240a-4ded-bb13-f05eb3158527" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.163:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.163:8443: connect: connection refused" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.285756 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.396911 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" event={"ID":"c5ee6173-1d4d-47da-b5cb-e3d711df9826","Type":"ContainerStarted","Data":"3a6f539013658362ca9b3c57b2dd59d5c31a2de25f02633c66022f05af4d940a"} Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.397199 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.402640 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689fdb796b-5m2hw" event={"ID":"8f7651db-4775-4828-a4ff-923c39645dd0","Type":"ContainerStarted","Data":"b21b7a52d0abb052c9f9b176928c5cba8738ac7ae6ba17a06a842a849682616b"} Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.402681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689fdb796b-5m2hw" event={"ID":"8f7651db-4775-4828-a4ff-923c39645dd0","Type":"ContainerStarted","Data":"47d728e4dc1ab4c8f753373bc0d57c120e9eac3963c83bfa0b336eac965b75b8"} Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.406566 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.407249 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.408483 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ht8p9" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.408567 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ht8p9" event={"ID":"a93af5fb-6812-4f30-9e89-e8c58b01a69e","Type":"ContainerDied","Data":"7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90"} Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.408849 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f82e4a710f88a817189a5c011c2652f98bc5756b9332faba015371cd9604d90" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.432425 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" podStartSLOduration=6.432404537 podStartE2EDuration="6.432404537s" podCreationTimestamp="2026-01-27 14:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:56.425905692 +0000 UTC m=+1278.077230094" watchObservedRunningTime="2026-01-27 14:32:56.432404537 +0000 UTC m=+1278.083728939" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.451224 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.502330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.502368 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.512122 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.512290 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-applier-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.523789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.534665 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:32:56 crc kubenswrapper[4833]: E0127 14:32:56.535207 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" containerName="cinder-db-sync" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.535270 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" containerName="cinder-db-sync" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.535536 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" containerName="cinder-db-sync" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.563665 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.607793 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9v4gg" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.615397 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.619814 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.624238 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.626389 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.634001 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.686828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.686915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.686974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8dqcv\" (UniqueName: \"kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.687002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.687055 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.687144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.701293 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.772575 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.797610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.798188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.798266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dqcv\" (UniqueName: \"kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.798297 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.798346 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.798433 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 
14:32:56.798619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.820716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dqcv\" (UniqueName: \"kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.839856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.837411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.840403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.840587 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") " pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.927980 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.933498 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 14:32:56 crc kubenswrapper[4833]: I0127 14:32:56.935542 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.014087 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.028358 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.030115 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.036825 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.046586 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114605 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zk6h\" (UniqueName: \"kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114751 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114876 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.114910 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217503 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217625 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zk6h\" (UniqueName: \"kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217714 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217751 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc 
kubenswrapper[4833]: I0127 14:32:57.217799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c56w\" (UniqueName: \"kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217854 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.217884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.218756 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.219638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.220189 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.221119 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.221250 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.242663 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zk6h\" (UniqueName: 
\"kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h\") pod \"dnsmasq-dns-6bb4fc677f-4cgcq\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.306195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320693 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c56w\" (UniqueName: \"kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320979 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.320999 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.321013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.321392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.321456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.330624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.331682 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.345255 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.352941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c56w\" (UniqueName: \"kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.356798 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts\") pod \"cinder-api-0\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.367014 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.425461 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.453348 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.463814 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 27 14:32:57 crc kubenswrapper[4833]: I0127 14:32:57.476803 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 14:32:58 crc kubenswrapper[4833]: I0127 14:32:58.432551 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="dnsmasq-dns" containerID="cri-o://3a6f539013658362ca9b3c57b2dd59d5c31a2de25f02633c66022f05af4d940a" gracePeriod=10 Jan 27 14:32:58 crc kubenswrapper[4833]: I0127 14:32:58.967662 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:32:58 crc kubenswrapper[4833]: I0127 14:32:58.968251 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.446798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-689fdb796b-5m2hw" event={"ID":"8f7651db-4775-4828-a4ff-923c39645dd0","Type":"ContainerStarted","Data":"ccbbfbea4bf4a79559995d625ab162634878a9c60540640e00760135797fe749"} Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.448738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.448768 4833 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.452544 4833 generic.go:334] "Generic (PLEG): container finished" podID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerID="3a6f539013658362ca9b3c57b2dd59d5c31a2de25f02633c66022f05af4d940a" exitCode=0 Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.452704 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" event={"ID":"c5ee6173-1d4d-47da-b5cb-e3d711df9826","Type":"ContainerDied","Data":"3a6f539013658362ca9b3c57b2dd59d5c31a2de25f02633c66022f05af4d940a"} Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.470187 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.470609 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.471909 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.494437 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-689fdb796b-5m2hw" podStartSLOduration=5.494415781 podStartE2EDuration="5.494415781s" podCreationTimestamp="2026-01-27 14:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:59.482031784 +0000 UTC m=+1281.133356196" watchObservedRunningTime="2026-01-27 14:32:59.494415781 +0000 UTC m=+1281.145740203" Jan 27 14:32:59 crc kubenswrapper[4833]: I0127 14:32:59.795618 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 
14:33:00.279960 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.795658 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.907778 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.907877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.907909 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.907964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.908062 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: 
\"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.908081 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smqt4\" (UniqueName: \"kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4\") pod \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\" (UID: \"c5ee6173-1d4d-47da-b5cb-e3d711df9826\") " Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.920904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4" (OuterVolumeSpecName: "kube-api-access-smqt4") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "kube-api-access-smqt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:00 crc kubenswrapper[4833]: I0127 14:33:00.963340 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.011772 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.011960 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smqt4\" (UniqueName: \"kubernetes.io/projected/c5ee6173-1d4d-47da-b5cb-e3d711df9826-kube-api-access-smqt4\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.027178 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.072788 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.101920 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config" (OuterVolumeSpecName: "config") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.112832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5ee6173-1d4d-47da-b5cb-e3d711df9826" (UID: "c5ee6173-1d4d-47da-b5cb-e3d711df9826"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.114303 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.114346 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.114363 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.114378 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5ee6173-1d4d-47da-b5cb-e3d711df9826-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.162649 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.477184 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" event={"ID":"c5ee6173-1d4d-47da-b5cb-e3d711df9826","Type":"ContainerDied","Data":"64fe49a94e96163574e70f6190b700056bad4f63c30885f0e74a681fc0f4812b"} Jan 27 
14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.477235 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-c66rd" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.477589 4833 scope.go:117] "RemoveContainer" containerID="3a6f539013658362ca9b3c57b2dd59d5c31a2de25f02633c66022f05af4d940a" Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.500512 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:33:01 crc kubenswrapper[4833]: I0127 14:33:01.508614 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-c66rd"] Jan 27 14:33:02 crc kubenswrapper[4833]: I0127 14:33:02.657005 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:33:03 crc kubenswrapper[4833]: I0127 14:33:03.238744 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" path="/var/lib/kubelet/pods/c5ee6173-1d4d-47da-b5cb-e3d711df9826/volumes" Jan 27 14:33:03 crc kubenswrapper[4833]: I0127 14:33:03.314950 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:33:03 crc kubenswrapper[4833]: I0127 14:33:03.768561 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c8b977f46-tpb5g" Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.260305 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.260604 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api-log" containerID="cri-o://b1a8396f658088cab73bfc6bf953fb7bb567049c58d162bb1b6534c0ab3a5b37" gracePeriod=30 Jan 27 
14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.260650 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api" containerID="cri-o://ec6c682cdca46ca81fcaf15731717ba8c6653c36692aa4f6d664417a07efca94" gracePeriod=30 Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.508762 4833 generic.go:334] "Generic (PLEG): container finished" podID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerID="bb2885ee72f3d9527f85bbf5fbf469bcf45058d8227ea6e033e8e2bf2956395d" exitCode=137 Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.509099 4833 generic.go:334] "Generic (PLEG): container finished" podID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerID="49b577c1fde8b5cbe8ecc68b5275570b3e97fba82792f204d3c4e448bf975eb4" exitCode=137 Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.509150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerDied","Data":"bb2885ee72f3d9527f85bbf5fbf469bcf45058d8227ea6e033e8e2bf2956395d"} Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.509223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerDied","Data":"49b577c1fde8b5cbe8ecc68b5275570b3e97fba82792f204d3c4e448bf975eb4"} Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.525343 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerID="71c1c059727ffa2ea65f10c1821c6e6ffd3ed28cea15698be92ba762e5cefec0" exitCode=137 Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.525378 4833 generic.go:334] "Generic (PLEG): container finished" podID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerID="5649838668c320f607064d2a42be88f546636ab9f5c01ac6f06dd56dacba639e" exitCode=137 Jan 27 
14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.525381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerDied","Data":"71c1c059727ffa2ea65f10c1821c6e6ffd3ed28cea15698be92ba762e5cefec0"} Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.525422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerDied","Data":"5649838668c320f607064d2a42be88f546636ab9f5c01ac6f06dd56dacba639e"} Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.529959 4833 generic.go:334] "Generic (PLEG): container finished" podID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerID="b1a8396f658088cab73bfc6bf953fb7bb567049c58d162bb1b6534c0ab3a5b37" exitCode=143 Jan 27 14:33:04 crc kubenswrapper[4833]: I0127 14:33:04.530000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerDied","Data":"b1a8396f658088cab73bfc6bf953fb7bb567049c58d162bb1b6534c0ab3a5b37"} Jan 27 14:33:06 crc kubenswrapper[4833]: I0127 14:33:06.446536 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-689fdb796b-5m2hw" Jan 27 14:33:06 crc kubenswrapper[4833]: I0127 14:33:06.543989 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:33:06 crc kubenswrapper[4833]: I0127 14:33:06.544345 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c8b977f46-tpb5g" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api-log" containerID="cri-o://d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523" gracePeriod=30 Jan 27 14:33:06 crc kubenswrapper[4833]: I0127 14:33:06.544955 4833 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/barbican-api-6c8b977f46-tpb5g" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api" containerID="cri-o://e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056" gracePeriod=30 Jan 27 14:33:06 crc kubenswrapper[4833]: I0127 14:33:06.632167 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8684fb6757-ql55d" Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.411320 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9322/\": read tcp 10.217.0.2:57238->10.217.0.174:9322: read: connection reset by peer" Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.411762 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.174:9322/\": read tcp 10.217.0.2:57226->10.217.0.174:9322: read: connection reset by peer" Jan 27 14:33:07 crc kubenswrapper[4833]: W0127 14:33:07.413276 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff624954_c6d6_4f1d_8b66_52573bddb34e.slice/crio-b0151c04bf04dde4c85320432f049237343925c1abcb972ffe074fbafff1512a WatchSource:0}: Error finding container b0151c04bf04dde4c85320432f049237343925c1abcb972ffe074fbafff1512a: Status 404 returned error can't find the container with id b0151c04bf04dde4c85320432f049237343925c1abcb972ffe074fbafff1512a Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.612175 4833 generic.go:334] "Generic (PLEG): container finished" podID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerID="ec6c682cdca46ca81fcaf15731717ba8c6653c36692aa4f6d664417a07efca94" exitCode=0 Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 
14:33:07.612512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerDied","Data":"ec6c682cdca46ca81fcaf15731717ba8c6653c36692aa4f6d664417a07efca94"} Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.615520 4833 generic.go:334] "Generic (PLEG): container finished" podID="796c7603-c98d-4dd0-b104-9197e5074655" containerID="d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523" exitCode=143 Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.615564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerDied","Data":"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523"} Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.616820 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerStarted","Data":"b0151c04bf04dde4c85320432f049237343925c1abcb972ffe074fbafff1512a"} Jan 27 14:33:07 crc kubenswrapper[4833]: I0127 14:33:07.919213 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.120631 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.328700 4833 scope.go:117] "RemoveContainer" containerID="fede047154900321fd4690719fe4d8480a7448a91d0ec0acb85d113666f26a18" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.500014 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.626239 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" 
event={"ID":"5959a39f-6b69-4f81-9cb8-541268073335","Type":"ContainerStarted","Data":"88294e190f14e817d86ab67714ecef5f68110a5640a79bf808d31fb64ce4f5cc"} Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.628946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7795d84f4f-bz4f2" event={"ID":"42b85b68-002e-4978-bc0c-311aa60f80fe","Type":"ContainerDied","Data":"4d43bc9b157faa35acd5288410c31c96a730e8eaedeb8e5cfba18fbdabdbc5c9"} Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.628992 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d43bc9b157faa35acd5288410c31c96a730e8eaedeb8e5cfba18fbdabdbc5c9" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.631972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7544d4446c-rj8rn" event={"ID":"0a306119-60bc-4953-ba3a-3e9a6ff99959","Type":"ContainerDied","Data":"5faff59a7cf42ed6aa148daac795723653c84815eff132d1ee327a421b4baa1a"} Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.632000 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5faff59a7cf42ed6aa148daac795723653c84815eff132d1ee327a421b4baa1a" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.850373 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.865203 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7795d84f4f-bz4f2" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.893834 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7544d4446c-rj8rn" Jan 27 14:33:08 crc kubenswrapper[4833]: I0127 14:33:08.903373 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000339 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs\") pod \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000644 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts\") pod \"0a306119-60bc-4953-ba3a-3e9a6ff99959\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts\") pod \"42b85b68-002e-4978-bc0c-311aa60f80fe\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000775 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs\") pod \"0a306119-60bc-4953-ba3a-3e9a6ff99959\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000800 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sblvv\" (UniqueName: \"kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv\") pod \"42b85b68-002e-4978-bc0c-311aa60f80fe\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000844 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle\") pod \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000900 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs\") pod \"42b85b68-002e-4978-bc0c-311aa60f80fe\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000930 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca\") pod \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000954 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data\") pod \"0a306119-60bc-4953-ba3a-3e9a6ff99959\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.000979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key\") pod \"42b85b68-002e-4978-bc0c-311aa60f80fe\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.001021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9v69\" (UniqueName: \"kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69\") pod \"0a306119-60bc-4953-ba3a-3e9a6ff99959\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 
14:33:09.001069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm78p\" (UniqueName: \"kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p\") pod \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.001103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data\") pod \"42b85b68-002e-4978-bc0c-311aa60f80fe\" (UID: \"42b85b68-002e-4978-bc0c-311aa60f80fe\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.001133 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data\") pod \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\" (UID: \"9caaeec5-a0d5-4c51-a290-cd283cc9497a\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.001158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key\") pod \"0a306119-60bc-4953-ba3a-3e9a6ff99959\" (UID: \"0a306119-60bc-4953-ba3a-3e9a6ff99959\") " Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.004697 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs" (OuterVolumeSpecName: "logs") pod "0a306119-60bc-4953-ba3a-3e9a6ff99959" (UID: "0a306119-60bc-4953-ba3a-3e9a6ff99959"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.012116 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs" (OuterVolumeSpecName: "logs") pod "42b85b68-002e-4978-bc0c-311aa60f80fe" (UID: "42b85b68-002e-4978-bc0c-311aa60f80fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.018278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0a306119-60bc-4953-ba3a-3e9a6ff99959" (UID: "0a306119-60bc-4953-ba3a-3e9a6ff99959"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.025797 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv" (OuterVolumeSpecName: "kube-api-access-sblvv") pod "42b85b68-002e-4978-bc0c-311aa60f80fe" (UID: "42b85b68-002e-4978-bc0c-311aa60f80fe"). InnerVolumeSpecName "kube-api-access-sblvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.028066 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs" (OuterVolumeSpecName: "logs") pod "9caaeec5-a0d5-4c51-a290-cd283cc9497a" (UID: "9caaeec5-a0d5-4c51-a290-cd283cc9497a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.042800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69" (OuterVolumeSpecName: "kube-api-access-x9v69") pod "0a306119-60bc-4953-ba3a-3e9a6ff99959" (UID: "0a306119-60bc-4953-ba3a-3e9a6ff99959"). InnerVolumeSpecName "kube-api-access-x9v69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.042843 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p" (OuterVolumeSpecName: "kube-api-access-wm78p") pod "9caaeec5-a0d5-4c51-a290-cd283cc9497a" (UID: "9caaeec5-a0d5-4c51-a290-cd283cc9497a"). InnerVolumeSpecName "kube-api-access-wm78p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.043228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "42b85b68-002e-4978-bc0c-311aa60f80fe" (UID: "42b85b68-002e-4978-bc0c-311aa60f80fe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104095 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a306119-60bc-4953-ba3a-3e9a6ff99959-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104132 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sblvv\" (UniqueName: \"kubernetes.io/projected/42b85b68-002e-4978-bc0c-311aa60f80fe-kube-api-access-sblvv\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104173 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42b85b68-002e-4978-bc0c-311aa60f80fe-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104186 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/42b85b68-002e-4978-bc0c-311aa60f80fe-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104197 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9v69\" (UniqueName: \"kubernetes.io/projected/0a306119-60bc-4953-ba3a-3e9a6ff99959-kube-api-access-x9v69\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104208 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm78p\" (UniqueName: \"kubernetes.io/projected/9caaeec5-a0d5-4c51-a290-cd283cc9497a-kube-api-access-wm78p\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104217 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0a306119-60bc-4953-ba3a-3e9a6ff99959-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.104255 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9caaeec5-a0d5-4c51-a290-cd283cc9497a-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.155956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9caaeec5-a0d5-4c51-a290-cd283cc9497a" (UID: "9caaeec5-a0d5-4c51-a290-cd283cc9497a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.168035 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts" (OuterVolumeSpecName: "scripts") pod "42b85b68-002e-4978-bc0c-311aa60f80fe" (UID: "42b85b68-002e-4978-bc0c-311aa60f80fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.170409 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts" (OuterVolumeSpecName: "scripts") pod "0a306119-60bc-4953-ba3a-3e9a6ff99959" (UID: "0a306119-60bc-4953-ba3a-3e9a6ff99959"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.187384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data" (OuterVolumeSpecName: "config-data") pod "0a306119-60bc-4953-ba3a-3e9a6ff99959" (UID: "0a306119-60bc-4953-ba3a-3e9a6ff99959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.187961 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9caaeec5-a0d5-4c51-a290-cd283cc9497a" (UID: "9caaeec5-a0d5-4c51-a290-cd283cc9497a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.189592 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data" (OuterVolumeSpecName: "config-data") pod "42b85b68-002e-4978-bc0c-311aa60f80fe" (UID: "42b85b68-002e-4978-bc0c-311aa60f80fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206343 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206384 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206396 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206409 4833 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206420 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0a306119-60bc-4953-ba3a-3e9a6ff99959-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.206431 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42b85b68-002e-4978-bc0c-311aa60f80fe-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.223608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data" (OuterVolumeSpecName: "config-data") pod "9caaeec5-a0d5-4c51-a290-cd283cc9497a" (UID: "9caaeec5-a0d5-4c51-a290-cd283cc9497a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.310814 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9caaeec5-a0d5-4c51-a290-cd283cc9497a-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.644719 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerStarted","Data":"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.647302 4833 generic.go:334] "Generic (PLEG): container finished" podID="5959a39f-6b69-4f81-9cb8-541268073335" containerID="9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564" exitCode=0
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.647348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" event={"ID":"5959a39f-6b69-4f81-9cb8-541268073335","Type":"ContainerDied","Data":"9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerStarted","Data":"ee67e2db6fdc89732a18699b4b29b04f084750876f3eedacf132e0475af93f4c"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656542 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-central-agent" containerID="cri-o://6fb8d7492567691ac359c7e114c45a9d8a8c69ffb220260d95d5b78cf30eff46" gracePeriod=30
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656617 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656653 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="proxy-httpd" containerID="cri-o://ee67e2db6fdc89732a18699b4b29b04f084750876f3eedacf132e0475af93f4c" gracePeriod=30
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656698 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-notification-agent" containerID="cri-o://a5a597e425bca6c647c9edc574b234b2e131cd0b46100565f0d6d9cf71113e04" gracePeriod=30
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.656700 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="sg-core" containerID="cri-o://b862939c604b2b62cbc437e7a9e228f10942d870fe6751c9dc9519d134664b8e" gracePeriod=30
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.718222 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.718819 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"9caaeec5-a0d5-4c51-a290-cd283cc9497a","Type":"ContainerDied","Data":"d09306b948a3dea406b170ebf4bf87308a06cf665822c68ba136f6fd7ba86402"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.718857 4833 scope.go:117] "RemoveContainer" containerID="ec6c682cdca46ca81fcaf15731717ba8c6653c36692aa4f6d664417a07efca94"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.737586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerStarted","Data":"bd1e516d8536fcd70c8500245e8972c7903a27966d241ac001fdd756d7e0b607"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.739020 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.887380475 podStartE2EDuration="1m12.739002158s" podCreationTimestamp="2026-01-27 14:31:57 +0000 UTC" firstStartedPulling="2026-01-27 14:31:59.731560045 +0000 UTC m=+1221.382884447" lastFinishedPulling="2026-01-27 14:33:08.583181728 +0000 UTC m=+1290.234506130" observedRunningTime="2026-01-27 14:33:09.734564249 +0000 UTC m=+1291.385888661" watchObservedRunningTime="2026-01-27 14:33:09.739002158 +0000 UTC m=+1291.390326560"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.746968 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c8b977f46-tpb5g" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:56490->10.217.0.179:9311: read: connection reset by peer"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.746956 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6c8b977f46-tpb5g" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.179:9311/healthcheck\": read tcp 10.217.0.2:56496->10.217.0.179:9311: read: connection reset by peer"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.756875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-694b98c787-zwc44" event={"ID":"e6666db7-ad38-4773-bc1f-35667d8ea76b","Type":"ContainerStarted","Data":"27b3ce025f1e2fdecf9bf59b20dcae30da0f02f231e62cf25580caffb0574d13"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.756920 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-694b98c787-zwc44" event={"ID":"e6666db7-ad38-4773-bc1f-35667d8ea76b","Type":"ContainerStarted","Data":"fdb306778bdaff14c2bd9125bd344b2d07844656154fcdedc376742ba6b1af03"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.761789 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7544d4446c-rj8rn"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.761871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" event={"ID":"7786a9e5-715f-4bd8-9eb9-4bd199e72b92","Type":"ContainerStarted","Data":"9c51cf8464a6f4d7665e30a4316f9e4feed5356057c9fdd844cd49abb0e88b47"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.761901 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" event={"ID":"7786a9e5-715f-4bd8-9eb9-4bd199e72b92","Type":"ContainerStarted","Data":"d072ef41e595e1ad567fc92556dbab351e6b8cea28fda7a6a0c7b9d575a18d86"}
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.761984 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7795d84f4f-bz4f2"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.768936 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.790756 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.812164 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-694b98c787-zwc44" podStartSLOduration=4.553669598 podStartE2EDuration="19.812147774s" podCreationTimestamp="2026-01-27 14:32:50 +0000 UTC" firstStartedPulling="2026-01-27 14:32:53.094102364 +0000 UTC m=+1274.745426786" lastFinishedPulling="2026-01-27 14:33:08.35258056 +0000 UTC m=+1290.003904962" observedRunningTime="2026-01-27 14:33:09.77396655 +0000 UTC m=+1291.425290952" watchObservedRunningTime="2026-01-27 14:33:09.812147774 +0000 UTC m=+1291.463472176"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.819799 4833 scope.go:117] "RemoveContainer" containerID="b1a8396f658088cab73bfc6bf953fb7bb567049c58d162bb1b6534c0ab3a5b37"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.826781 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827207 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827226 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="dnsmasq-dns"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827244 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="dnsmasq-dns"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827255 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="init"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827261 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="init"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827271 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827278 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827299 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827305 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827317 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827323 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827333 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827339 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: E0127 14:33:09.827355 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827360 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827541 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827557 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827567 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827578 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" containerName="horizon"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827586 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ee6173-1d4d-47da-b5cb-e3d711df9826" containerName="dnsmasq-dns"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827605 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.827613 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" containerName="watcher-api-log"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.833188 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.836358 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.836643 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.838025 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.841958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.847485 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86457ddd7d-rqrvt" podStartSLOduration=4.771080221 podStartE2EDuration="19.847428504s" podCreationTimestamp="2026-01-27 14:32:50 +0000 UTC" firstStartedPulling="2026-01-27 14:32:53.09037767 +0000 UTC m=+1274.741702072" lastFinishedPulling="2026-01-27 14:33:08.166725953 +0000 UTC m=+1289.818050355" observedRunningTime="2026-01-27 14:33:09.809887284 +0000 UTC m=+1291.461211696" watchObservedRunningTime="2026-01-27 14:33:09.847428504 +0000 UTC m=+1291.498752906"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.900059 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.908509 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7795d84f4f-bz4f2"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.916507 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.924360 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7544d4446c-rj8rn"]
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.927392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.928715 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-config-data\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.928875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.928917 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.928970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad57c5-23b4-4243-9700-f67937e1378d-logs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.929028 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:09 crc kubenswrapper[4833]: I0127 14:33:09.929136 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vp2\" (UniqueName: \"kubernetes.io/projected/68ad57c5-23b4-4243-9700-f67937e1378d-kube-api-access-47vp2\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.041901 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.042164 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-54f64dd7dd-8w4dp"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.042530 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-config-data\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.044924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.044974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.045033 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad57c5-23b4-4243-9700-f67937e1378d-logs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.045065 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.049506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68ad57c5-23b4-4243-9700-f67937e1378d-logs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.049834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47vp2\" (UniqueName: \"kubernetes.io/projected/68ad57c5-23b4-4243-9700-f67937e1378d-kube-api-access-47vp2\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.050583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.058309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.059891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.061895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-config-data\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.064645 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68ad57c5-23b4-4243-9700-f67937e1378d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.073325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47vp2\" (UniqueName: \"kubernetes.io/projected/68ad57c5-23b4-4243-9700-f67937e1378d-kube-api-access-47vp2\") pod \"watcher-api-0\" (UID: \"68ad57c5-23b4-4243-9700-f67937e1378d\") " pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.176421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.317794 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c8b977f46-tpb5g"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.457142 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle\") pod \"796c7603-c98d-4dd0-b104-9197e5074655\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") "
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.457243 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom\") pod \"796c7603-c98d-4dd0-b104-9197e5074655\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") "
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.457267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs\") pod \"796c7603-c98d-4dd0-b104-9197e5074655\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") "
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.457343 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data\") pod \"796c7603-c98d-4dd0-b104-9197e5074655\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") "
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.457408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qtzj\" (UniqueName: \"kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj\") pod \"796c7603-c98d-4dd0-b104-9197e5074655\" (UID: \"796c7603-c98d-4dd0-b104-9197e5074655\") "
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.461676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs" (OuterVolumeSpecName: "logs") pod "796c7603-c98d-4dd0-b104-9197e5074655" (UID: "796c7603-c98d-4dd0-b104-9197e5074655"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.465963 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj" (OuterVolumeSpecName: "kube-api-access-2qtzj") pod "796c7603-c98d-4dd0-b104-9197e5074655" (UID: "796c7603-c98d-4dd0-b104-9197e5074655"). InnerVolumeSpecName "kube-api-access-2qtzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.485241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "796c7603-c98d-4dd0-b104-9197e5074655" (UID: "796c7603-c98d-4dd0-b104-9197e5074655"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.527740 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data" (OuterVolumeSpecName: "config-data") pod "796c7603-c98d-4dd0-b104-9197e5074655" (UID: "796c7603-c98d-4dd0-b104-9197e5074655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.543426 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "796c7603-c98d-4dd0-b104-9197e5074655" (UID: "796c7603-c98d-4dd0-b104-9197e5074655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.574027 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.574058 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796c7603-c98d-4dd0-b104-9197e5074655-logs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.574069 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.574077 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qtzj\" (UniqueName: \"kubernetes.io/projected/796c7603-c98d-4dd0-b104-9197e5074655-kube-api-access-2qtzj\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.574088 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796c7603-c98d-4dd0-b104-9197e5074655-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.734495 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.783843 4833 generic.go:334] "Generic (PLEG): container finished" podID="796c7603-c98d-4dd0-b104-9197e5074655" containerID="e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056" exitCode=0
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.783938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerDied","Data":"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056"}
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.783966 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c8b977f46-tpb5g" event={"ID":"796c7603-c98d-4dd0-b104-9197e5074655","Type":"ContainerDied","Data":"209a5302094ca57e72f96b61dc124ffbc11f9cd75f3b15e26cbb9c10bc6675f5"}
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.783982 4833 scope.go:117] "RemoveContainer" containerID="e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.784105 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c8b977f46-tpb5g"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.790003 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerStarted","Data":"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"}
Jan 27 14:33:10 crc kubenswrapper[4833]: W0127 14:33:10.790146 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68ad57c5_23b4_4243_9700_f67937e1378d.slice/crio-49c6ed7ca58e7000820390ef9bda50cdcb9c52a164a130fd18292266537b4aef WatchSource:0}: Error finding container 49c6ed7ca58e7000820390ef9bda50cdcb9c52a164a130fd18292266537b4aef: Status 404 returned error can't find the container with id 49c6ed7ca58e7000820390ef9bda50cdcb9c52a164a130fd18292266537b4aef
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.791942 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" event={"ID":"5959a39f-6b69-4f81-9cb8-541268073335","Type":"ContainerStarted","Data":"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd"}
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.793064 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq"
Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.807214 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=13.583052881 podStartE2EDuration="14.807193619s" podCreationTimestamp="2026-01-27 14:32:56 +0000 UTC" firstStartedPulling="2026-01-27 14:33:07.418774585 +0000 UTC m=+1289.070098987" lastFinishedPulling="2026-01-27 14:33:08.642915323 +0000 UTC m=+1290.294239725" observedRunningTime="2026-01-27 14:33:10.806733149 +0000 UTC m=+1292.458057551" watchObservedRunningTime="2026-01-27 14:33:10.807193619
+0000 UTC m=+1292.458518021" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809044 4833 generic.go:334] "Generic (PLEG): container finished" podID="53234ba1-9a64-40ba-b483-26a9174669bd" containerID="ee67e2db6fdc89732a18699b4b29b04f084750876f3eedacf132e0475af93f4c" exitCode=0 Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809071 4833 generic.go:334] "Generic (PLEG): container finished" podID="53234ba1-9a64-40ba-b483-26a9174669bd" containerID="b862939c604b2b62cbc437e7a9e228f10942d870fe6751c9dc9519d134664b8e" exitCode=2 Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809079 4833 generic.go:334] "Generic (PLEG): container finished" podID="53234ba1-9a64-40ba-b483-26a9174669bd" containerID="6fb8d7492567691ac359c7e114c45a9d8a8c69ffb220260d95d5b78cf30eff46" exitCode=0 Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerDied","Data":"ee67e2db6fdc89732a18699b4b29b04f084750876f3eedacf132e0475af93f4c"} Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809166 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerDied","Data":"b862939c604b2b62cbc437e7a9e228f10942d870fe6751c9dc9519d134664b8e"} Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.809180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerDied","Data":"6fb8d7492567691ac359c7e114c45a9d8a8c69ffb220260d95d5b78cf30eff46"} Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.823269 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cd9489696-52kzm" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.828374 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerStarted","Data":"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43"} Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.832599 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" podStartSLOduration=14.832580447 podStartE2EDuration="14.832580447s" podCreationTimestamp="2026-01-27 14:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:10.827228227 +0000 UTC m=+1292.478552649" watchObservedRunningTime="2026-01-27 14:33:10.832580447 +0000 UTC m=+1292.483904849" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.887878 4833 scope.go:117] "RemoveContainer" containerID="d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.912505 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.936216 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c8b977f46-tpb5g"] Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.942376 4833 scope.go:117] "RemoveContainer" containerID="e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056" Jan 27 14:33:10 crc kubenswrapper[4833]: E0127 14:33:10.944107 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056\": container with ID starting with e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056 not found: ID does not exist" containerID="e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.944155 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056"} err="failed to get container status \"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056\": rpc error: code = NotFound desc = could not find container \"e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056\": container with ID starting with e4fbc5264b84132321f576e2911cf1ee127bb4a44b9018c6b7e382100e084056 not found: ID does not exist" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.944184 4833 scope.go:117] "RemoveContainer" containerID="d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523" Jan 27 14:33:10 crc kubenswrapper[4833]: E0127 14:33:10.944586 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523\": container with ID starting with d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523 not found: ID does not exist" containerID="d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.944602 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523"} err="failed to get container status \"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523\": rpc error: code = NotFound desc = could not find container \"d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523\": container with ID starting with d41747b9b47d5979e867190350629bf1162071da729b0081e90fb0655f9e4523 not found: ID does not exist" Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.948519 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.948766 4833 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" containerID="cri-o://5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab" gracePeriod=30 Jan 27 14:33:10 crc kubenswrapper[4833]: I0127 14:33:10.948891 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon-log" containerID="cri-o://cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e" gracePeriod=30 Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.224775 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a306119-60bc-4953-ba3a-3e9a6ff99959" path="/var/lib/kubelet/pods/0a306119-60bc-4953-ba3a-3e9a6ff99959/volumes" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.226108 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b85b68-002e-4978-bc0c-311aa60f80fe" path="/var/lib/kubelet/pods/42b85b68-002e-4978-bc0c-311aa60f80fe/volumes" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.227915 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796c7603-c98d-4dd0-b104-9197e5074655" path="/var/lib/kubelet/pods/796c7603-c98d-4dd0-b104-9197e5074655/volumes" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.229015 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9caaeec5-a0d5-4c51-a290-cd283cc9497a" path="/var/lib/kubelet/pods/9caaeec5-a0d5-4c51-a290-cd283cc9497a/volumes" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.855889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"68ad57c5-23b4-4243-9700-f67937e1378d","Type":"ContainerStarted","Data":"a062213442f87990167126ee29cdbe3bff89b412c09f68fda8883180c5dcdc21"} Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 
14:33:11.856297 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"68ad57c5-23b4-4243-9700-f67937e1378d","Type":"ContainerStarted","Data":"18b88e0444d1c13ebcbfbf8adc27f9f3f9bc0a693788a86aebb1aaa099121fea"} Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.856317 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"68ad57c5-23b4-4243-9700-f67937e1378d","Type":"ContainerStarted","Data":"49c6ed7ca58e7000820390ef9bda50cdcb9c52a164a130fd18292266537b4aef"} Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.858065 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.862813 4833 generic.go:334] "Generic (PLEG): container finished" podID="53234ba1-9a64-40ba-b483-26a9174669bd" containerID="a5a597e425bca6c647c9edc574b234b2e131cd0b46100565f0d6d9cf71113e04" exitCode=0 Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.862909 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerDied","Data":"a5a597e425bca6c647c9edc574b234b2e131cd0b46100565f0d6d9cf71113e04"} Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.871415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerStarted","Data":"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689"} Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.871691 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api-log" containerID="cri-o://b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" gracePeriod=30 Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.871726 4833 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api" containerID="cri-o://1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" gracePeriod=30 Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.871901 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.891878 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.891857669 podStartE2EDuration="2.891857669s" podCreationTimestamp="2026-01-27 14:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:11.888496213 +0000 UTC m=+1293.539820615" watchObservedRunningTime="2026-01-27 14:33:11.891857669 +0000 UTC m=+1293.543182081" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.918131 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=15.918115055 podStartE2EDuration="15.918115055s" podCreationTimestamp="2026-01-27 14:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:11.911666522 +0000 UTC m=+1293.562990924" watchObservedRunningTime="2026-01-27 14:33:11.918115055 +0000 UTC m=+1293.569439457" Jan 27 14:33:11 crc kubenswrapper[4833]: I0127 14:33:11.928573 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.239658 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.367838 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.418296 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsxw5\" (UniqueName: \"kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.418410 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.419084 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.419161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.419268 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc 
kubenswrapper[4833]: I0127 14:33:12.419328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.419373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd\") pod \"53234ba1-9a64-40ba-b483-26a9174669bd\" (UID: \"53234ba1-9a64-40ba-b483-26a9174669bd\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.420781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.420881 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.425648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5" (OuterVolumeSpecName: "kube-api-access-lsxw5") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "kube-api-access-lsxw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.428056 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts" (OuterVolumeSpecName: "scripts") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.467203 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.500309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.521946 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.522136 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.522912 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.522782 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs" (OuterVolumeSpecName: "logs") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.523012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.523314 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.523391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.523479 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.523660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c56w\" (UniqueName: \"kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w\") pod \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\" (UID: \"7c3cd4a3-1c04-4917-ac38-9bc422a6af73\") " Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525006 4833 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525029 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525042 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525055 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525086 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525098 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsxw5\" (UniqueName: \"kubernetes.io/projected/53234ba1-9a64-40ba-b483-26a9174669bd-kube-api-access-lsxw5\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525110 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53234ba1-9a64-40ba-b483-26a9174669bd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.525122 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.527378 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.530149 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w" (OuterVolumeSpecName: "kube-api-access-4c56w") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "kube-api-access-4c56w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.531159 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts" (OuterVolumeSpecName: "scripts") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.531784 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data" (OuterVolumeSpecName: "config-data") pod "53234ba1-9a64-40ba-b483-26a9174669bd" (UID: "53234ba1-9a64-40ba-b483-26a9174669bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.549335 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.579840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data" (OuterVolumeSpecName: "config-data") pod "7c3cd4a3-1c04-4917-ac38-9bc422a6af73" (UID: "7c3cd4a3-1c04-4917-ac38-9bc422a6af73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627542 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627581 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627595 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627628 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c56w\" (UniqueName: \"kubernetes.io/projected/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-kube-api-access-4c56w\") on node \"crc\" DevicePath \"\"" Jan 27 
14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627646 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53234ba1-9a64-40ba-b483-26a9174669bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.627658 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c3cd4a3-1c04-4917-ac38-9bc422a6af73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.889231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53234ba1-9a64-40ba-b483-26a9174669bd","Type":"ContainerDied","Data":"25ffafadfbfe2dd95ef9e333a4c17c9aea0a169ae500378b3c153db5610bdf4c"} Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.889288 4833 scope.go:117] "RemoveContainer" containerID="ee67e2db6fdc89732a18699b4b29b04f084750876f3eedacf132e0475af93f4c" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.889432 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.900990 4833 generic.go:334] "Generic (PLEG): container finished" podID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerID="1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" exitCode=0 Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.901025 4833 generic.go:334] "Generic (PLEG): container finished" podID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerID="b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" exitCode=143 Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.901554 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerDied","Data":"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689"} Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.901597 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerDied","Data":"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43"} Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.901615 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7c3cd4a3-1c04-4917-ac38-9bc422a6af73","Type":"ContainerDied","Data":"bd1e516d8536fcd70c8500245e8972c7903a27966d241ac001fdd756d7e0b607"} Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.901669 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.958560 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.973435 4833 scope.go:117] "RemoveContainer" containerID="b862939c604b2b62cbc437e7a9e228f10942d870fe6751c9dc9519d134664b8e" Jan 27 14:33:12 crc kubenswrapper[4833]: I0127 14:33:12.983389 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.015741 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.036053 4833 scope.go:117] "RemoveContainer" containerID="a5a597e425bca6c647c9edc574b234b2e131cd0b46100565f0d6d9cf71113e04" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.041026 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.062361 4833 scope.go:117] "RemoveContainer" containerID="6fb8d7492567691ac359c7e114c45a9d8a8c69ffb220260d95d5b78cf30eff46" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063215 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063766 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-notification-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063779 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-notification-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063790 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 
14:33:13.063795 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063808 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-central-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063814 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-central-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063821 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="proxy-httpd" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063827 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="proxy-httpd" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063837 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063843 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063870 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063875 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063889 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="sg-core" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063895 
4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="sg-core" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.063909 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.063914 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064071 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064086 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" containerName="cinder-api" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064101 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="sg-core" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064111 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064121 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="proxy-httpd" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064129 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-central-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064140 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="796c7603-c98d-4dd0-b104-9197e5074655" containerName="barbican-api-log" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.064151 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" containerName="ceilometer-notification-agent" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.066248 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.069058 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.072633 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.072954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.074174 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.078378 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.078421 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.078382 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.086061 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.090600 4833 scope.go:117] "RemoveContainer" containerID="1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.099539 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 
14:33:13.114519 4833 scope.go:117] "RemoveContainer" containerID="b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.134187 4833 scope.go:117] "RemoveContainer" containerID="1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.134578 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689\": container with ID starting with 1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689 not found: ID does not exist" containerID="1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.134617 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689"} err="failed to get container status \"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689\": rpc error: code = NotFound desc = could not find container \"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689\": container with ID starting with 1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689 not found: ID does not exist" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.134644 4833 scope.go:117] "RemoveContainer" containerID="b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" Jan 27 14:33:13 crc kubenswrapper[4833]: E0127 14:33:13.134987 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43\": container with ID starting with b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43 not found: ID does not exist" 
containerID="b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.135034 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43"} err="failed to get container status \"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43\": rpc error: code = NotFound desc = could not find container \"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43\": container with ID starting with b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43 not found: ID does not exist" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.135165 4833 scope.go:117] "RemoveContainer" containerID="1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.135477 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689"} err="failed to get container status \"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689\": rpc error: code = NotFound desc = could not find container \"1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689\": container with ID starting with 1e45d7c49dcb888b02ea3fe834b085558c8d1394dd4ab442fa789c15c164a689 not found: ID does not exist" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.135572 4833 scope.go:117] "RemoveContainer" containerID="b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.135899 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43"} err="failed to get container status \"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43\": rpc error: code = NotFound desc = could 
not find container \"b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43\": container with ID starting with b1c1ccbf9c47053030cdab6dc9b21d420ec5fe612f034786f96f0eecef08ec43 not found: ID does not exist" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.220298 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53234ba1-9a64-40ba-b483-26a9174669bd" path="/var/lib/kubelet/pods/53234ba1-9a64-40ba-b483-26a9174669bd/volumes" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.221292 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3cd4a3-1c04-4917-ac38-9bc422a6af73" path="/var/lib/kubelet/pods/7c3cd4a3-1c04-4917-ac38-9bc422a6af73/volumes" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.241937 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.241999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242046 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-scripts\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242083 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242155 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-logs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242245 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242341 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data\") pod \"ceilometer-0\" (UID: 
\"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242418 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6htp\" (UniqueName: \"kubernetes.io/projected/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-kube-api-access-l6htp\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242445 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sw72\" (UniqueName: \"kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242499 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242525 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc 
kubenswrapper[4833]: I0127 14:33:13.242564 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.242589 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sw72\" (UniqueName: \"kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-scripts\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344544 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 
14:33:13.344567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-logs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344658 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344681 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data\") pod \"cinder-api-0\" (UID: 
\"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6htp\" (UniqueName: \"kubernetes.io/projected/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-kube-api-access-l6htp\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.344969 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.345041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.346734 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.348518 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.348624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-logs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.348817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-scripts\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.349869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.350027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.350750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.351035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.351097 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.351421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-config-data\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.352171 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.353926 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " pod="openstack/cinder-api-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.360546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sw72\" (UniqueName: \"kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72\") pod \"ceilometer-0\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") " pod="openstack/ceilometer-0" Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.367180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6htp\" (UniqueName: \"kubernetes.io/projected/c2877c4a-1ce0-44f7-9c27-fe1819344f2e-kube-api-access-l6htp\") pod \"cinder-api-0\" (UID: \"c2877c4a-1ce0-44f7-9c27-fe1819344f2e\") " 
pod="openstack/cinder-api-0"
Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.396490 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:33:13 crc kubenswrapper[4833]: I0127 14:33:13.409578 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 14:33:14 crc kubenswrapper[4833]: I0127 14:33:14.126246 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:33:14 crc kubenswrapper[4833]: W0127 14:33:14.578230 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6190735_a3c8_46a0_9126_72ea1f36db34.slice/crio-40051371d073576ddfb3a971e2096eef516fbae9ebac8e6e68d98869a117263f WatchSource:0}: Error finding container 40051371d073576ddfb3a971e2096eef516fbae9ebac8e6e68d98869a117263f: Status 404 returned error can't find the container with id 40051371d073576ddfb3a971e2096eef516fbae9ebac8e6e68d98869a117263f
Jan 27 14:33:14 crc kubenswrapper[4833]: I0127 14:33:14.583736 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:14 crc kubenswrapper[4833]: W0127 14:33:14.652528 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2877c4a_1ce0_44f7_9c27_fe1819344f2e.slice/crio-d7f8716d277b151b6f916ea510e720258b5cd81ddc7b50f4c10568f7d2e8a5d0 WatchSource:0}: Error finding container d7f8716d277b151b6f916ea510e720258b5cd81ddc7b50f4c10568f7d2e8a5d0: Status 404 returned error can't find the container with id d7f8716d277b151b6f916ea510e720258b5cd81ddc7b50f4c10568f7d2e8a5d0
Jan 27 14:33:14 crc kubenswrapper[4833]: I0127 14:33:14.653719 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.140566 4833 generic.go:334] "Generic (PLEG): container finished" podID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerID="5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab" exitCode=0
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.140612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerDied","Data":"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab"}
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.142353 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2877c4a-1ce0-44f7-9c27-fe1819344f2e","Type":"ContainerStarted","Data":"d7f8716d277b151b6f916ea510e720258b5cd81ddc7b50f4c10568f7d2e8a5d0"}
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.143931 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerStarted","Data":"40051371d073576ddfb3a971e2096eef516fbae9ebac8e6e68d98869a117263f"}
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.177719 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.178162 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 14:33:15 crc kubenswrapper[4833]: I0127 14:33:15.471640 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.145065 4833 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda324f832-7082-443a-87c7-3cef46ebe7ea"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda324f832-7082-443a-87c7-3cef46ebe7ea] : Timed out while waiting for systemd to remove kubepods-besteffort-poda324f832_7082_443a_87c7_3cef46ebe7ea.slice"
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.160582 4833 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5a57c811-cef6-458c-bb52-ef9e0861e39a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5a57c811-cef6-458c-bb52-ef9e0861e39a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5a57c811_cef6_458c_bb52_ef9e0861e39a.slice"
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.171288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2877c4a-1ce0-44f7-9c27-fe1819344f2e","Type":"ContainerStarted","Data":"1cbb4b8e1ba4968496c376b7881222b10a9c62f6d3af978aac9b83da91268fdb"}
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.173502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerStarted","Data":"37bc54429fefde564fa395911ac41ea155fd9d725634309d0a5da7db2c516364"}
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.187979 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused"
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.534967 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bdcd59f97-6hlst"
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.619596 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8684fb6757-ql55d"]
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.619822 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8684fb6757-ql55d" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-api" containerID="cri-o://021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082" gracePeriod=30
Jan 27 14:33:16 crc kubenswrapper[4833]: I0127 14:33:16.620268 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8684fb6757-ql55d" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-httpd" containerID="cri-o://9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127" gracePeriod=30
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.160082 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.199786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerStarted","Data":"d59263671635fc23889d2320fe953afdbf5ff885db52eac54bf471df725bccfb"}
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.204589 4833 generic.go:334] "Generic (PLEG): container finished" podID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerID="9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127" exitCode=0
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.204681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerDied","Data":"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127"}
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.229600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c2877c4a-1ce0-44f7-9c27-fe1819344f2e","Type":"ContainerStarted","Data":"2ebbb472a6651ab18e568ef900353ce6200fea6776805364a5509e679dcd7f2e"}
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.229646 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.232589 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.232799 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="cinder-scheduler" containerID="cri-o://722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630" gracePeriod=30
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.232939 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="probe" containerID="cri-o://5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae" gracePeriod=30
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.267242 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.267224712 podStartE2EDuration="5.267224712s" podCreationTimestamp="2026-01-27 14:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:17.244979814 +0000 UTC m=+1298.896304216" watchObservedRunningTime="2026-01-27 14:33:17.267224712 +0000 UTC m=+1298.918549104"
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.314493 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq"
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.412741 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"]
Jan 27 14:33:17 crc kubenswrapper[4833]: I0127 14:33:17.412967 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="dnsmasq-dns" containerID="cri-o://6a59a8400d9cf95ef6a90835c4e68cf530b5b6a6b6b404bd2f9aec4109b0546c" gracePeriod=10
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.235730 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerStarted","Data":"811a56266c12fd0dfc50bb6f7112011b47fc5e7d0f6f51276f9d759fcb1c09b0"}
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.239187 4833 generic.go:334] "Generic (PLEG): container finished" podID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerID="6a59a8400d9cf95ef6a90835c4e68cf530b5b6a6b6b404bd2f9aec4109b0546c" exitCode=0
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.240020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" event={"ID":"bb675f1d-22c9-4f48-a415-e6a5fc15f357","Type":"ContainerDied","Data":"6a59a8400d9cf95ef6a90835c4e68cf530b5b6a6b6b404bd2f9aec4109b0546c"}
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.714029 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused"
Jan 27 14:33:18 crc kubenswrapper[4833]: E0127 14:33:18.835827 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff624954_c6d6_4f1d_8b66_52573bddb34e.slice/crio-conmon-5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.883922 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cd7556dcb-c7h4r"
Jan 27 14:33:18 crc kubenswrapper[4833]: I0127 14:33:18.885617 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cd7556dcb-c7h4r"
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.143103 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb"
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.182499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb5l\" (UniqueName: \"kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.182986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.183231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.183346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.183478 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.183570 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0\") pod \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\" (UID: \"bb675f1d-22c9-4f48-a415-e6a5fc15f357\") "
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.204313 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l" (OuterVolumeSpecName: "kube-api-access-fpb5l") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "kube-api-access-fpb5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.298523 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.305316 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.305357 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb5l\" (UniqueName: \"kubernetes.io/projected/bb675f1d-22c9-4f48-a415-e6a5fc15f357-kube-api-access-fpb5l\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.306061 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb"
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.309075 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerID="5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae" exitCode=0
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.324986 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.331978 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.337731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config" (OuterVolumeSpecName: "config") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.340053 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb675f1d-22c9-4f48-a415-e6a5fc15f357" (UID: "bb675f1d-22c9-4f48-a415-e6a5fc15f357"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.378828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-2nbmb" event={"ID":"bb675f1d-22c9-4f48-a415-e6a5fc15f357","Type":"ContainerDied","Data":"6ea2167e7fb1ff2bb7968b8bacd4779478d9741a13f72f560284a9037fbb9ea9"}
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.378906 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerDied","Data":"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"}
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.378970 4833 scope.go:117] "RemoveContainer" containerID="6a59a8400d9cf95ef6a90835c4e68cf530b5b6a6b6b404bd2f9aec4109b0546c"
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.408004 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.408062 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.408075 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.408090 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb675f1d-22c9-4f48-a415-e6a5fc15f357-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.411337 4833 scope.go:117] "RemoveContainer" containerID="eaf8b553436a80f57df8d411e5c4a1eb8a3f5204de56414425a0cb50d553de6e"
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.641428 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"]
Jan 27 14:33:19 crc kubenswrapper[4833]: I0127 14:33:19.649864 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-2nbmb"]
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.176923 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.196187 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.323369 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerStarted","Data":"383d2f2ff79abb76a13e58f93acdbbfd0b62b259e251d55c1633dacb41f22a02"}
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.359289 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.680813511 podStartE2EDuration="8.359242557s" podCreationTimestamp="2026-01-27 14:33:12 +0000 UTC" firstStartedPulling="2026-01-27 14:33:14.580498911 +0000 UTC m=+1296.231823303" lastFinishedPulling="2026-01-27 14:33:19.258927947 +0000 UTC m=+1300.910252349" observedRunningTime="2026-01-27 14:33:20.347536685 +0000 UTC m=+1301.998861097" watchObservedRunningTime="2026-01-27 14:33:20.359242557 +0000 UTC m=+1302.010566959"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.363921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.391220 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-68475b756f-rzm75"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.806514 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8684fb6757-ql55d"
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.842763 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.842880 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kftwq\" (UniqueName: \"kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.842915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.842947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.843042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.843072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.843224 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle\") pod \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\" (UID: \"d622569d-4961-40a5-8bfc-0f08e9ed8b82\") "
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.848483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.850172 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq" (OuterVolumeSpecName: "kube-api-access-kftwq") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "kube-api-access-kftwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.914434 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.940783 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config" (OuterVolumeSpecName: "config") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.940935 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.941821 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946043 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946074 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946110 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946121 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946129 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kftwq\" (UniqueName: \"kubernetes.io/projected/d622569d-4961-40a5-8bfc-0f08e9ed8b82-kube-api-access-kftwq\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.946140 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:20 crc kubenswrapper[4833]: I0127 14:33:20.955972 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d622569d-4961-40a5-8bfc-0f08e9ed8b82" (UID: "d622569d-4961-40a5-8bfc-0f08e9ed8b82"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.052282 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d622569d-4961-40a5-8bfc-0f08e9ed8b82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.146952 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.222245 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" path="/var/lib/kubelet/pods/bb675f1d-22c9-4f48-a415-e6a5fc15f357/volumes"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255701 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255757 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dqcv\" (UniqueName: \"kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.255931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts\") pod \"ff624954-c6d6-4f1d-8b66-52573bddb34e\" (UID: \"ff624954-c6d6-4f1d-8b66-52573bddb34e\") "
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.256541 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff624954-c6d6-4f1d-8b66-52573bddb34e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.270760 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv" (OuterVolumeSpecName: "kube-api-access-8dqcv") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "kube-api-access-8dqcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.274930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.292928 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts" (OuterVolumeSpecName: "scripts") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.312592 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.333798 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerID="722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630" exitCode=0
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.333855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerDied","Data":"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630"}
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.333880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff624954-c6d6-4f1d-8b66-52573bddb34e","Type":"ContainerDied","Data":"b0151c04bf04dde4c85320432f049237343925c1abcb972ffe074fbafff1512a"}
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.333897 4833 scope.go:117] "RemoveContainer" containerID="5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.333978 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.336589 4833 generic.go:334] "Generic (PLEG): container finished" podID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerID="021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082" exitCode=0
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.337392 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8684fb6757-ql55d"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.337838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerDied","Data":"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082"}
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.337865 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.337876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8684fb6757-ql55d" event={"ID":"d622569d-4961-40a5-8bfc-0f08e9ed8b82","Type":"ContainerDied","Data":"d064bfb71e3964ebd719b533717ed5a221970616d9e2e33b7345f801b5b99f6f"}
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.358899 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dqcv\" (UniqueName: \"kubernetes.io/projected/ff624954-c6d6-4f1d-8b66-52573bddb34e-kube-api-access-8dqcv\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.358933 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.358948 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.358960 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.385435 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data" (OuterVolumeSpecName: "config-data") pod "ff624954-c6d6-4f1d-8b66-52573bddb34e" (UID: "ff624954-c6d6-4f1d-8b66-52573bddb34e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.435333 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8684fb6757-ql55d"]
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.437789 4833 scope.go:117] "RemoveContainer" containerID="722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.447961 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8684fb6757-ql55d"]
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.460982 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff624954-c6d6-4f1d-8b66-52573bddb34e-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.462486 4833 scope.go:117] "RemoveContainer" containerID="5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"
Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.462964 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae\": container with ID starting with 5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae not found: ID does not exist" containerID="5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"
Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.463004 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae"}
err="failed to get container status \"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae\": rpc error: code = NotFound desc = could not find container \"5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae\": container with ID starting with 5abdf3ee77f6ddca68930869310555db116185429cbea5cfb842b8fb5f2783ae not found: ID does not exist" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.463035 4833 scope.go:117] "RemoveContainer" containerID="722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.463439 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630\": container with ID starting with 722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630 not found: ID does not exist" containerID="722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.463485 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630"} err="failed to get container status \"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630\": rpc error: code = NotFound desc = could not find container \"722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630\": container with ID starting with 722bcbb427a7655eeef5af7c73beaf04793881f4e271a360a062ff81024e1630 not found: ID does not exist" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.463504 4833 scope.go:117] "RemoveContainer" containerID="9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.482542 4833 scope.go:117] "RemoveContainer" containerID="021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082" Jan 27 14:33:21 crc kubenswrapper[4833]: 
I0127 14:33:21.499565 4833 scope.go:117] "RemoveContainer" containerID="9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.500155 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127\": container with ID starting with 9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127 not found: ID does not exist" containerID="9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.500196 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127"} err="failed to get container status \"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127\": rpc error: code = NotFound desc = could not find container \"9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127\": container with ID starting with 9ec333ea108c975413cd440d3ea214bcc172027d3e779cd9554b69bf81ab1127 not found: ID does not exist" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.500223 4833 scope.go:117] "RemoveContainer" containerID="021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.500533 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082\": container with ID starting with 021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082 not found: ID does not exist" containerID="021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.500563 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082"} err="failed to get container status \"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082\": rpc error: code = NotFound desc = could not find container \"021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082\": container with ID starting with 021d6122628fc69cd1517f826189931132e1bf01acf634e147f21e39bf090082 not found: ID does not exist" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.671524 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.682645 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.695510 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.695983 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="init" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696005 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="init" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.696023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="probe" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696033 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="probe" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.696056 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="cinder-scheduler" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696065 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="cinder-scheduler" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.696092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="dnsmasq-dns" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696100 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="dnsmasq-dns" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.696120 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-api" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696129 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-api" Jan 27 14:33:21 crc kubenswrapper[4833]: E0127 14:33:21.696144 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-httpd" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696153 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-httpd" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696401 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-api" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696425 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="probe" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696440 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" containerName="neutron-httpd" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696485 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" containerName="cinder-scheduler" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.696511 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb675f1d-22c9-4f48-a415-e6a5fc15f357" containerName="dnsmasq-dns" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.698342 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.713753 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.719312 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.871421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.871489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.871540 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afc7077f-8140-4b04-bcad-c7553dc1ca64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 
14:33:21.871614 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.871641 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-scripts\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.871682 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzw4t\" (UniqueName: \"kubernetes.io/projected/afc7077f-8140-4b04-bcad-c7553dc1ca64-kube-api-access-nzw4t\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afc7077f-8140-4b04-bcad-c7553dc1ca64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973550 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-scripts\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzw4t\" (UniqueName: \"kubernetes.io/projected/afc7077f-8140-4b04-bcad-c7553dc1ca64-kube-api-access-nzw4t\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.973690 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.974470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afc7077f-8140-4b04-bcad-c7553dc1ca64-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.978306 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.978724 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-scripts\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.979021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.979528 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afc7077f-8140-4b04-bcad-c7553dc1ca64-config-data\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:21 crc kubenswrapper[4833]: I0127 14:33:21.997234 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzw4t\" (UniqueName: \"kubernetes.io/projected/afc7077f-8140-4b04-bcad-c7553dc1ca64-kube-api-access-nzw4t\") pod \"cinder-scheduler-0\" (UID: \"afc7077f-8140-4b04-bcad-c7553dc1ca64\") " pod="openstack/cinder-scheduler-0" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.019693 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.477486 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.698685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.700127 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.702260 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.702473 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.702662 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dj99r" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.713168 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.906161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt859\" (UniqueName: \"kubernetes.io/projected/6a29332d-3473-40f9-950e-9cd9249a37aa-kube-api-access-zt859\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.906214 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 
14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.906291 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:22 crc kubenswrapper[4833]: I0127 14:33:22.906507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.008168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.008253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt859\" (UniqueName: \"kubernetes.io/projected/6a29332d-3473-40f9-950e-9cd9249a37aa-kube-api-access-zt859\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.008279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.008307 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.009190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.013162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.013386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6a29332d-3473-40f9-950e-9cd9249a37aa-openstack-config-secret\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.033117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt859\" (UniqueName: \"kubernetes.io/projected/6a29332d-3473-40f9-950e-9cd9249a37aa-kube-api-access-zt859\") pod \"openstackclient\" (UID: \"6a29332d-3473-40f9-950e-9cd9249a37aa\") " pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.059105 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.227611 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d622569d-4961-40a5-8bfc-0f08e9ed8b82" path="/var/lib/kubelet/pods/d622569d-4961-40a5-8bfc-0f08e9ed8b82/volumes" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.228687 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff624954-c6d6-4f1d-8b66-52573bddb34e" path="/var/lib/kubelet/pods/ff624954-c6d6-4f1d-8b66-52573bddb34e/volumes" Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.366863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"afc7077f-8140-4b04-bcad-c7553dc1ca64","Type":"ContainerStarted","Data":"767829cc2aa72eecde554dd69597b61fc820cf7b04ff50b53a20c9f0d42bf59b"} Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.367199 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"afc7077f-8140-4b04-bcad-c7553dc1ca64","Type":"ContainerStarted","Data":"f95783a51badbe9d12b17a1c4ec4386cbfaec8f83fcfb719ecc086145a94ba51"} Jan 27 14:33:23 crc kubenswrapper[4833]: I0127 14:33:23.551143 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:33:23 crc kubenswrapper[4833]: W0127 14:33:23.562350 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a29332d_3473_40f9_950e_9cd9249a37aa.slice/crio-08ec5b7744021bb8bba125b1aafedbd0ff779ac591baf2bfeeea4fb22372761f WatchSource:0}: Error finding container 08ec5b7744021bb8bba125b1aafedbd0ff779ac591baf2bfeeea4fb22372761f: Status 404 returned error can't find the container with id 08ec5b7744021bb8bba125b1aafedbd0ff779ac591baf2bfeeea4fb22372761f Jan 27 14:33:24 crc kubenswrapper[4833]: I0127 14:33:24.379435 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"afc7077f-8140-4b04-bcad-c7553dc1ca64","Type":"ContainerStarted","Data":"5e8ca3b1b3e59631e7d1d435d9aa9d481a78ac98a03833ede9da5ca3332f991a"} Jan 27 14:33:24 crc kubenswrapper[4833]: I0127 14:33:24.383345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6a29332d-3473-40f9-950e-9cd9249a37aa","Type":"ContainerStarted","Data":"08ec5b7744021bb8bba125b1aafedbd0ff779ac591baf2bfeeea4fb22372761f"} Jan 27 14:33:24 crc kubenswrapper[4833]: I0127 14:33:24.404569 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.404548872 podStartE2EDuration="3.404548872s" podCreationTimestamp="2026-01-27 14:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:24.400585435 +0000 UTC m=+1306.051909847" watchObservedRunningTime="2026-01-27 14:33:24.404548872 +0000 UTC m=+1306.055873274" Jan 27 14:33:25 crc kubenswrapper[4833]: I0127 14:33:25.422025 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 14:33:26 crc kubenswrapper[4833]: I0127 14:33:26.188686 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 27 14:33:27 crc kubenswrapper[4833]: I0127 14:33:27.020615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:33:27 crc kubenswrapper[4833]: I0127 14:33:27.922483 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:27 crc kubenswrapper[4833]: I0127 
14:33:27.923290 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" containerName="watcher-decision-engine" containerID="cri-o://586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571" gracePeriod=30
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.177971 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.178594 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-central-agent" containerID="cri-o://37bc54429fefde564fa395911ac41ea155fd9d725634309d0a5da7db2c516364" gracePeriod=30
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.179791 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-notification-agent" containerID="cri-o://d59263671635fc23889d2320fe953afdbf5ff885db52eac54bf471df725bccfb" gracePeriod=30
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.179832 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="proxy-httpd" containerID="cri-o://383d2f2ff79abb76a13e58f93acdbbfd0b62b259e251d55c1633dacb41f22a02" gracePeriod=30
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.179809 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="sg-core" containerID="cri-o://811a56266c12fd0dfc50bb6f7112011b47fc5e7d0f6f51276f9d759fcb1c09b0" gracePeriod=30
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.186966 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.436890 4833 generic.go:334] "Generic (PLEG): container finished" podID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerID="383d2f2ff79abb76a13e58f93acdbbfd0b62b259e251d55c1633dacb41f22a02" exitCode=0
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.436927 4833 generic.go:334] "Generic (PLEG): container finished" podID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerID="811a56266c12fd0dfc50bb6f7112011b47fc5e7d0f6f51276f9d759fcb1c09b0" exitCode=2
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.436950 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerDied","Data":"383d2f2ff79abb76a13e58f93acdbbfd0b62b259e251d55c1633dacb41f22a02"}
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.437007 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerDied","Data":"811a56266c12fd0dfc50bb6f7112011b47fc5e7d0f6f51276f9d759fcb1c09b0"}
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.957793 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d4ff448c-rqtwt"]
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.959663 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.971094 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d4ff448c-rqtwt"]
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.975903 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.975938 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 27 14:33:29 crc kubenswrapper[4833]: I0127 14:33:29.976666 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154148 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-combined-ca-bundle\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-log-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-public-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154336 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-run-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-config-data\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-etc-swift\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.154501 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-internal-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.155334 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbhq\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-kube-api-access-qjbhq\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257187 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbhq\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-kube-api-access-qjbhq\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-combined-ca-bundle\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-log-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257762 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-public-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-run-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-config-data\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-etc-swift\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.257935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-internal-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.260483 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-run-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.263730 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dae07283-7914-42d0-be9e-93d61eb88267-log-httpd\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.265382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-combined-ca-bundle\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.265567 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-etc-swift\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.265567 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-config-data\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.269043 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-public-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.277245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dae07283-7914-42d0-be9e-93d61eb88267-internal-tls-certs\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.280759 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbhq\" (UniqueName: \"kubernetes.io/projected/dae07283-7914-42d0-be9e-93d61eb88267-kube-api-access-qjbhq\") pod \"swift-proxy-5d4ff448c-rqtwt\" (UID: \"dae07283-7914-42d0-be9e-93d61eb88267\") " pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.450510 4833 generic.go:334] "Generic (PLEG): container finished" podID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerID="d59263671635fc23889d2320fe953afdbf5ff885db52eac54bf471df725bccfb" exitCode=0
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.450537 4833 generic.go:334] "Generic (PLEG): container finished" podID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerID="37bc54429fefde564fa395911ac41ea155fd9d725634309d0a5da7db2c516364" exitCode=0
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.450558 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerDied","Data":"d59263671635fc23889d2320fe953afdbf5ff885db52eac54bf471df725bccfb"}
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.450582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerDied","Data":"37bc54429fefde564fa395911ac41ea155fd9d725634309d0a5da7db2c516364"}
Jan 27 14:33:30 crc kubenswrapper[4833]: I0127 14:33:30.576146 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d4ff448c-rqtwt"
Jan 27 14:33:32 crc kubenswrapper[4833]: I0127 14:33:32.260226 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.070645 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236277 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sw72\" (UniqueName: \"kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236306 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236354 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.236492 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data\") pod \"c6190735-a3c8-46a0-9126-72ea1f36db34\" (UID: \"c6190735-a3c8-46a0-9126-72ea1f36db34\") "
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.237131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.237437 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.243145 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts" (OuterVolumeSpecName: "scripts") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.252481 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72" (OuterVolumeSpecName: "kube-api-access-5sw72") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "kube-api-access-5sw72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.298365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.339076 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sw72\" (UniqueName: \"kubernetes.io/projected/c6190735-a3c8-46a0-9126-72ea1f36db34-kube-api-access-5sw72\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.339121 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.339134 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.339145 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.339157 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6190735-a3c8-46a0-9126-72ea1f36db34-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.348659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.378113 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data" (OuterVolumeSpecName: "config-data") pod "c6190735-a3c8-46a0-9126-72ea1f36db34" (UID: "c6190735-a3c8-46a0-9126-72ea1f36db34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.386852 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d4ff448c-rqtwt"]
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.440622 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.440646 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6190735-a3c8-46a0-9126-72ea1f36db34-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.493426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6190735-a3c8-46a0-9126-72ea1f36db34","Type":"ContainerDied","Data":"40051371d073576ddfb3a971e2096eef516fbae9ebac8e6e68d98869a117263f"}
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.493492 4833 scope.go:117] "RemoveContainer" containerID="383d2f2ff79abb76a13e58f93acdbbfd0b62b259e251d55c1633dacb41f22a02"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.493525 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.498848 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6a29332d-3473-40f9-950e-9cd9249a37aa","Type":"ContainerStarted","Data":"017320025aeeb7be301f6f9afe00488f5d42f727903d207ceeb018578fa49cc5"}
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.503802 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d4ff448c-rqtwt" event={"ID":"dae07283-7914-42d0-be9e-93d61eb88267","Type":"ContainerStarted","Data":"3f5b43bdded95eed00928d303ce806047295bb236baed00b6b529a2f6cfb59c5"}
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.529379 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.24658372 podStartE2EDuration="12.529352131s" podCreationTimestamp="2026-01-27 14:33:22 +0000 UTC" firstStartedPulling="2026-01-27 14:33:23.568057134 +0000 UTC m=+1305.219381526" lastFinishedPulling="2026-01-27 14:33:33.850825535 +0000 UTC m=+1315.502149937" observedRunningTime="2026-01-27 14:33:34.51991038 +0000 UTC m=+1316.171234782" watchObservedRunningTime="2026-01-27 14:33:34.529352131 +0000 UTC m=+1316.180676533"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.566098 4833 scope.go:117] "RemoveContainer" containerID="811a56266c12fd0dfc50bb6f7112011b47fc5e7d0f6f51276f9d759fcb1c09b0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.582498 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.592944 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.602664 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:34 crc kubenswrapper[4833]: E0127 14:33:34.603023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-central-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603040 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-central-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: E0127 14:33:34.603055 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-notification-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603062 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-notification-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: E0127 14:33:34.603080 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="sg-core"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603086 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="sg-core"
Jan 27 14:33:34 crc kubenswrapper[4833]: E0127 14:33:34.603102 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="proxy-httpd"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603107 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="proxy-httpd"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603275 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-central-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603291 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="proxy-httpd"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603316 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="ceilometer-notification-agent"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.603334 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" containerName="sg-core"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.609470 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.615463 4833 scope.go:117] "RemoveContainer" containerID="d59263671635fc23889d2320fe953afdbf5ff885db52eac54bf471df725bccfb"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.632615 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.632642 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.654883 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.659620 4833 scope.go:117] "RemoveContainer" containerID="37bc54429fefde564fa395911ac41ea155fd9d725634309d0a5da7db2c516364"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746538 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746662 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746699 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746803 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746883 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzwv4\" (UniqueName: \"kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.746916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848353 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848874 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.848969 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.849130 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.849225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzwv4\" (UniqueName: \"kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.853075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.853985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.855362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.858686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.870057 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzwv4\" (UniqueName: \"kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4\") pod \"ceilometer-0\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.941472 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:33:34 crc kubenswrapper[4833]: I0127 14:33:34.951721 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.052454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs\") pod \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") "
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.052510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data\") pod \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") "
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.052604 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca\") pod \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") "
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.052732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle\") pod \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") "
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.052795 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h65n\" (UniqueName: \"kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n\") pod \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\" (UID: \"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb\") "
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.054749 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs" (OuterVolumeSpecName: "logs") pod "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" (UID: "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.059058 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n" (OuterVolumeSpecName: "kube-api-access-9h65n") pod "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" (UID: "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb"). InnerVolumeSpecName "kube-api-access-9h65n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.091576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" (UID: "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.107571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" (UID: "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.133700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data" (OuterVolumeSpecName: "config-data") pod "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" (UID: "c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb").
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.156556 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.156851 4833 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.156862 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.156870 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h65n\" (UniqueName: \"kubernetes.io/projected/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-kube-api-access-9h65n\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.156879 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.253375 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6190735-a3c8-46a0-9126-72ea1f36db34" path="/var/lib/kubelet/pods/c6190735-a3c8-46a0-9126-72ea1f36db34/volumes" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.319085 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.513705 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d4ff448c-rqtwt" 
event={"ID":"dae07283-7914-42d0-be9e-93d61eb88267","Type":"ContainerStarted","Data":"edf89d8684d68abc2d1b7c9645986094c82d9440f2d578ac315cfea0439c8e9d"} Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.513765 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d4ff448c-rqtwt" event={"ID":"dae07283-7914-42d0-be9e-93d61eb88267","Type":"ContainerStarted","Data":"79045862f2c419b5f6232f2c2e50f1be984a064d79fc2cab02164982c9465eae"} Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.513862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d4ff448c-rqtwt" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.514028 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d4ff448c-rqtwt" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.516540 4833 generic.go:334] "Generic (PLEG): container finished" podID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" containerID="586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571" exitCode=0 Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.516600 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.516605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb","Type":"ContainerDied","Data":"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571"} Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.516748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb","Type":"ContainerDied","Data":"2d9e6dfa820afc4d5196f5a6830590ed86fb5a2bc72e17312a32fc69a92e38eb"} Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.516786 4833 scope.go:117] "RemoveContainer" containerID="586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.518318 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerStarted","Data":"9d44ffa283fb17171e20ff8254de0a48eabf4e3e831c4eb98c5a9244d1279b90"} Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.541462 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d4ff448c-rqtwt" podStartSLOduration=6.5414316580000005 podStartE2EDuration="6.541431658s" podCreationTimestamp="2026-01-27 14:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:35.536542678 +0000 UTC m=+1317.187867090" watchObservedRunningTime="2026-01-27 14:33:35.541431658 +0000 UTC m=+1317.192756060" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.544767 4833 scope.go:117] "RemoveContainer" containerID="586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571" Jan 27 14:33:35 crc kubenswrapper[4833]: E0127 14:33:35.545755 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571\": container with ID starting with 586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571 not found: ID does not exist" containerID="586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.545795 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571"} err="failed to get container status \"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571\": rpc error: code = NotFound desc = could not find container \"586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571\": container with ID starting with 586c0dc08bdea62833c1cb7501dd1eaca2f77231e83aa3f29dd74c6b64c7a571 not found: ID does not exist" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.559910 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.571626 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.579407 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: E0127 14:33:35.579811 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" containerName="watcher-decision-engine" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.579825 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" containerName="watcher-decision-engine" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.580015 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" containerName="watcher-decision-engine" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.580638 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.585688 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.606092 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.666357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tn97\" (UniqueName: \"kubernetes.io/projected/099ff0ad-2eba-43e9-99e3-86fb4b476793-kube-api-access-5tn97\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.666421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.666528 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.666673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-config-data\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.666974 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff0ad-2eba-43e9-99e3-86fb4b476793-logs\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.753099 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.753561 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-httpd" containerID="cri-o://3a9fdb203e2d259b5c986b5aab84427317433c3a5996445f1eaabb1b237fb87d" gracePeriod=30 Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.753393 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-log" containerID="cri-o://5fb719ed1f26e8b15b731bd9effac6db8d4bb0c7a686eb5e9afaad1f272286e5" gracePeriod=30 Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.768622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff0ad-2eba-43e9-99e3-86fb4b476793-logs\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.768994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-5tn97\" (UniqueName: \"kubernetes.io/projected/099ff0ad-2eba-43e9-99e3-86fb4b476793-kube-api-access-5tn97\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.769034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.769155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.769186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-config-data\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.770697 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff0ad-2eba-43e9-99e3-86fb4b476793-logs\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.775417 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.775963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-config-data\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.776416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/099ff0ad-2eba-43e9-99e3-86fb4b476793-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.801069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tn97\" (UniqueName: \"kubernetes.io/projected/099ff0ad-2eba-43e9-99e3-86fb4b476793-kube-api-access-5tn97\") pod \"watcher-decision-engine-0\" (UID: \"099ff0ad-2eba-43e9-99e3-86fb4b476793\") " pod="openstack/watcher-decision-engine-0" Jan 27 14:33:35 crc kubenswrapper[4833]: I0127 14:33:35.910086 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.151652 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 14:33:36 crc kubenswrapper[4833]: W0127 14:33:36.153639 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099ff0ad_2eba_43e9_99e3_86fb4b476793.slice/crio-ad77b866c0f7bed36c3a3fd25b5ea6f9248d48cf5d5e7a172b1220a4147fe15d WatchSource:0}: Error finding container ad77b866c0f7bed36c3a3fd25b5ea6f9248d48cf5d5e7a172b1220a4147fe15d: Status 404 returned error can't find the container with id ad77b866c0f7bed36c3a3fd25b5ea6f9248d48cf5d5e7a172b1220a4147fe15d Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.188352 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54f64dd7dd-8w4dp" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.188490 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.566552 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerStarted","Data":"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a"} Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.575044 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"099ff0ad-2eba-43e9-99e3-86fb4b476793","Type":"ContainerStarted","Data":"12b294d2d6ab0de1139a364c4dab7759fd810bf0ffbd5d568b5d06f1d849dac8"} Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 
14:33:36.575095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"099ff0ad-2eba-43e9-99e3-86fb4b476793","Type":"ContainerStarted","Data":"ad77b866c0f7bed36c3a3fd25b5ea6f9248d48cf5d5e7a172b1220a4147fe15d"} Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.589028 4833 generic.go:334] "Generic (PLEG): container finished" podID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerID="5fb719ed1f26e8b15b731bd9effac6db8d4bb0c7a686eb5e9afaad1f272286e5" exitCode=143 Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.590021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerDied","Data":"5fb719ed1f26e8b15b731bd9effac6db8d4bb0c7a686eb5e9afaad1f272286e5"} Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.602659 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=1.6026405910000001 podStartE2EDuration="1.602640591s" podCreationTimestamp="2026-01-27 14:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:36.598342535 +0000 UTC m=+1318.249666937" watchObservedRunningTime="2026-01-27 14:33:36.602640591 +0000 UTC m=+1318.253964993" Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.865086 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.865391 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-log" containerID="cri-o://9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3" gracePeriod=30 Jan 27 14:33:36 crc kubenswrapper[4833]: I0127 14:33:36.866367 
4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-httpd" containerID="cri-o://ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12" gracePeriod=30 Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.196588 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-v8nhd"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.199499 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.262911 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb" path="/var/lib/kubelet/pods/c1e84c4a-5a0c-473c-bc1f-ed52ac0994bb/volumes" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.264129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v8nhd"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.298287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.298568 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76mw\" (UniqueName: \"kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.304338 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-db-create-cgrlm"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.310799 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.323321 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cgrlm"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.400224 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.400290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgd4b\" (UniqueName: \"kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.400376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.400655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76mw\" (UniqueName: \"kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 
27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.401096 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.428478 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8f54-account-create-update-hsskq"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.429713 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.434198 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f54-account-create-update-hsskq"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.453557 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.456849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76mw\" (UniqueName: \"kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw\") pod \"nova-api-db-create-v8nhd\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.504368 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgd4b\" (UniqueName: \"kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.504490 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgx6b\" (UniqueName: \"kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.504544 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.504608 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.505261 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.514591 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-czh77"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.516501 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.520664 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.525497 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgd4b\" (UniqueName: \"kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b\") pod \"nova-cell0-db-create-cgrlm\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.532782 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-czh77"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.605959 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgtd\" (UniqueName: \"kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd\") pod \"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.606027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgx6b\" (UniqueName: \"kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.606092 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: 
\"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.606134 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts\") pod \"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.607104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.608427 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1995-account-create-update-7x427"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.610761 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.615269 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerID="9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3" exitCode=143 Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.616126 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerDied","Data":"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3"} Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.622561 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.632128 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.638324 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgx6b\" (UniqueName: \"kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b\") pod \"nova-api-8f54-account-create-update-hsskq\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.642586 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1995-account-create-update-7x427"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.710464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgtd\" (UniqueName: \"kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd\") pod \"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " 
pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.710551 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdtj\" (UniqueName: \"kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.710623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts\") pod \"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.710683 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.712044 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts\") pod \"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.748068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgtd\" (UniqueName: \"kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd\") pod 
\"nova-cell1-db-create-czh77\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.802349 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.815988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.816118 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdtj\" (UniqueName: \"kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.817300 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.825974 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9458-account-create-update-2ndkv"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.827524 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.832533 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.840829 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdtj\" (UniqueName: \"kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj\") pod \"nova-cell0-1995-account-create-update-7x427\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.848169 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9458-account-create-update-2ndkv"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.864091 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.911028 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.911239 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="534b7701-45c8-403b-8601-3e22e9177c61" containerName="kube-state-metrics" containerID="cri-o://46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3" gracePeriod=30 Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.919776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:37 crc kubenswrapper[4833]: 
I0127 14:33:37.919831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl7f\" (UniqueName: \"kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.944060 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:37 crc kubenswrapper[4833]: I0127 14:33:37.944349 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.021828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.021881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl7f\" (UniqueName: \"kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.023281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: 
\"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.051657 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl7f\" (UniqueName: \"kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f\") pod \"nova-cell1-9458-account-create-update-2ndkv\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.076201 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.085683 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-v8nhd"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.260628 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cgrlm"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.412811 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f54-account-create-update-hsskq"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.513240 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.618991 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1995-account-create-update-7x427"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.635386 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czx9k\" (UniqueName: \"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k\") pod \"534b7701-45c8-403b-8601-3e22e9177c61\" (UID: \"534b7701-45c8-403b-8601-3e22e9177c61\") " Jan 27 14:33:38 crc kubenswrapper[4833]: W0127 14:33:38.652521 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb269ee5_31fd_4d2a_83db_abe3047254fd.slice/crio-49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317 WatchSource:0}: Error finding container 49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317: Status 404 returned error can't find the container with id 49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317 Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.652928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f54-account-create-update-hsskq" event={"ID":"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53","Type":"ContainerStarted","Data":"c50afcc3b43ec8a9ffd1c85feaafddb48dc66243f3157566c318d5343a997b44"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.659629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerStarted","Data":"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.661912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k" (OuterVolumeSpecName: "kube-api-access-czx9k") pod "534b7701-45c8-403b-8601-3e22e9177c61" (UID: "534b7701-45c8-403b-8601-3e22e9177c61"). InnerVolumeSpecName "kube-api-access-czx9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.662998 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cgrlm" event={"ID":"7ca61e50-dbbc-4e99-ad42-9f769a410a6d","Type":"ContainerStarted","Data":"bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.663045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cgrlm" event={"ID":"7ca61e50-dbbc-4e99-ad42-9f769a410a6d","Type":"ContainerStarted","Data":"85ca2c745f7f92861e986a71060314005a03c92c3bd71973f562a0077a6d3201"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.669542 4833 generic.go:334] "Generic (PLEG): container finished" podID="534b7701-45c8-403b-8601-3e22e9177c61" containerID="46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3" exitCode=2 Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.669642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"534b7701-45c8-403b-8601-3e22e9177c61","Type":"ContainerDied","Data":"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.669671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"534b7701-45c8-403b-8601-3e22e9177c61","Type":"ContainerDied","Data":"3483705b20007cf08a33addeb5147c3a3601d47e718c824101f5cd77f483666d"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.669706 4833 scope.go:117] "RemoveContainer" containerID="46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3" Jan 27 
14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.669871 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.687542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v8nhd" event={"ID":"355b8f4b-7773-4472-a1a9-84ee61cba511","Type":"ContainerStarted","Data":"67b4a282153dd24a6e7803e64eb4f9c35bc5b57a2b224d1b50094194025b0bb2"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.689388 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v8nhd" event={"ID":"355b8f4b-7773-4472-a1a9-84ee61cba511","Type":"ContainerStarted","Data":"bc3af4f48fe1507d68fb865608c86a9a2133950ee2a7f36ff532da86aac45ab7"} Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.728632 4833 scope.go:117] "RemoveContainer" containerID="46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3" Jan 27 14:33:38 crc kubenswrapper[4833]: E0127 14:33:38.730024 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3\": container with ID starting with 46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3 not found: ID does not exist" containerID="46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.730067 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3"} err="failed to get container status \"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3\": rpc error: code = NotFound desc = could not find container \"46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3\": container with ID starting with 
46bc86dda0355a68993706c5d55430c9c0a66182d03db354bbcfaac03b39a9d3 not found: ID does not exist" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.738157 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czx9k\" (UniqueName: \"kubernetes.io/projected/534b7701-45c8-403b-8601-3e22e9177c61-kube-api-access-czx9k\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.747683 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-v8nhd" podStartSLOduration=1.747661456 podStartE2EDuration="1.747661456s" podCreationTimestamp="2026-01-27 14:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:38.731769001 +0000 UTC m=+1320.383093413" watchObservedRunningTime="2026-01-27 14:33:38.747661456 +0000 UTC m=+1320.398985858" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.784363 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.819824 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.830681 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:38 crc kubenswrapper[4833]: E0127 14:33:38.831373 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534b7701-45c8-403b-8601-3e22e9177c61" containerName="kube-state-metrics" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.831386 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="534b7701-45c8-403b-8601-3e22e9177c61" containerName="kube-state-metrics" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.831587 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="534b7701-45c8-403b-8601-3e22e9177c61" 
containerName="kube-state-metrics" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.833497 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.836479 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.836496 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.845221 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.884151 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-czh77"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.901631 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9458-account-create-update-2ndkv"] Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.942798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.942861 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.943055 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrw2\" (UniqueName: \"kubernetes.io/projected/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-api-access-plrw2\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:38 crc kubenswrapper[4833]: I0127 14:33:38.943088 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.045563 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plrw2\" (UniqueName: \"kubernetes.io/projected/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-api-access-plrw2\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.045618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.045699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.045722 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.052968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.053899 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.057177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.070090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrw2\" (UniqueName: \"kubernetes.io/projected/f38cf44c-23a5-428d-8598-ec073d2148bf-kube-api-access-plrw2\") pod \"kube-state-metrics-0\" (UID: \"f38cf44c-23a5-428d-8598-ec073d2148bf\") " pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.247173 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="534b7701-45c8-403b-8601-3e22e9177c61" path="/var/lib/kubelet/pods/534b7701-45c8-403b-8601-3e22e9177c61/volumes" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.274940 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:33:39 crc kubenswrapper[4833]: E0127 14:33:39.555154 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb30e6b9_8bcf_4e87_9642_9ed79d5a6e53.slice/crio-bb41b043dfc8781773cbe0cd7dff2cc915090964c29161bcdf1c51574ace54d8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca61e50_dbbc_4e99_ad42_9f769a410a6d.slice/crio-bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca61e50_dbbc_4e99_ad42_9f769a410a6d.slice/crio-conmon-bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.704110 4833 generic.go:334] "Generic (PLEG): container finished" podID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerID="3a9fdb203e2d259b5c986b5aab84427317433c3a5996445f1eaabb1b237fb87d" exitCode=0 Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.704219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerDied","Data":"3a9fdb203e2d259b5c986b5aab84427317433c3a5996445f1eaabb1b237fb87d"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.705383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" 
event={"ID":"89b245df-2c38-4337-9a1b-41c31fc88e1c","Type":"ContainerStarted","Data":"f121fde75e59ffb51f43d7bcd001d66d384ebc23aa043d4c978d053076bfbcd8"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.707168 4833 generic.go:334] "Generic (PLEG): container finished" podID="355b8f4b-7773-4472-a1a9-84ee61cba511" containerID="67b4a282153dd24a6e7803e64eb4f9c35bc5b57a2b224d1b50094194025b0bb2" exitCode=0 Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.707272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v8nhd" event={"ID":"355b8f4b-7773-4472-a1a9-84ee61cba511","Type":"ContainerDied","Data":"67b4a282153dd24a6e7803e64eb4f9c35bc5b57a2b224d1b50094194025b0bb2"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.709074 4833 generic.go:334] "Generic (PLEG): container finished" podID="fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" containerID="bb41b043dfc8781773cbe0cd7dff2cc915090964c29161bcdf1c51574ace54d8" exitCode=0 Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.709138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f54-account-create-update-hsskq" event={"ID":"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53","Type":"ContainerDied","Data":"bb41b043dfc8781773cbe0cd7dff2cc915090964c29161bcdf1c51574ace54d8"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.711532 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czh77" event={"ID":"03ce025e-318d-4abf-accf-2a7a35d7ec0b","Type":"ContainerStarted","Data":"91445dff079c934b107445cabb58fc80c81f7384bb40c515bac307486cae3b7f"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.720917 4833 generic.go:334] "Generic (PLEG): container finished" podID="7ca61e50-dbbc-4e99-ad42-9f769a410a6d" containerID="bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97" exitCode=0 Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.721184 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-cgrlm" event={"ID":"7ca61e50-dbbc-4e99-ad42-9f769a410a6d","Type":"ContainerDied","Data":"bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.722966 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1995-account-create-update-7x427" event={"ID":"db269ee5-31fd-4d2a-83db-abe3047254fd","Type":"ContainerStarted","Data":"49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317"} Jan 27 14:33:39 crc kubenswrapper[4833]: I0127 14:33:39.783643 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:33:39 crc kubenswrapper[4833]: W0127 14:33:39.785059 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf38cf44c_23a5_428d_8598_ec073d2148bf.slice/crio-37d1819f9125e5bb8943098b0970819030a4fb87dc3fe6f89b3b7df2d2843b89 WatchSource:0}: Error finding container 37d1819f9125e5bb8943098b0970819030a4fb87dc3fe6f89b3b7df2d2843b89: Status 404 returned error can't find the container with id 37d1819f9125e5bb8943098b0970819030a4fb87dc3fe6f89b3b7df2d2843b89 Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.399749 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.485433 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.485499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.486021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.486095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.486166 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7krm\" (UniqueName: \"kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.488351 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.488576 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.488655 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data\") pod \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\" (UID: \"ae6adef2-48b4-4342-a13e-6b2541eeeff1\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.497081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.503140 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.503291 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs" (OuterVolumeSpecName: "logs") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.506255 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm" (OuterVolumeSpecName: "kube-api-access-t7krm") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "kube-api-access-t7krm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.506598 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts" (OuterVolumeSpecName: "scripts") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.554272 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591690 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591730 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae6adef2-48b4-4342-a13e-6b2541eeeff1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591740 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591751 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7krm\" (UniqueName: \"kubernetes.io/projected/ae6adef2-48b4-4342-a13e-6b2541eeeff1-kube-api-access-t7krm\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591764 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.591790 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.592830 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d4ff448c-rqtwt" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.593462 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d4ff448c-rqtwt" Jan 27 14:33:40 crc 
kubenswrapper[4833]: I0127 14:33:40.605304 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data" (OuterVolumeSpecName: "config-data") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.642983 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.656699 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.663922 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae6adef2-48b4-4342-a13e-6b2541eeeff1" (UID: "ae6adef2-48b4-4342-a13e-6b2541eeeff1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.694114 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.694146 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.694157 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae6adef2-48b4-4342-a13e-6b2541eeeff1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.754129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerStarted","Data":"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.772799 4833 generic.go:334] "Generic (PLEG): container finished" podID="03ce025e-318d-4abf-accf-2a7a35d7ec0b" containerID="282f66dc67ecfafc95ab4c5d9ac8ef292a19d369e4f7c3c12d0ba9978fba5924" exitCode=0 Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.772849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czh77" event={"ID":"03ce025e-318d-4abf-accf-2a7a35d7ec0b","Type":"ContainerDied","Data":"282f66dc67ecfafc95ab4c5d9ac8ef292a19d369e4f7c3c12d0ba9978fba5924"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.786326 4833 generic.go:334] "Generic (PLEG): container finished" podID="db269ee5-31fd-4d2a-83db-abe3047254fd" containerID="0544b1ee1eaafdb570f60f30f7df478a62aef6caa0229e215abd3394a3d19a72" exitCode=0 Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 
14:33:40.786601 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1995-account-create-update-7x427" event={"ID":"db269ee5-31fd-4d2a-83db-abe3047254fd","Type":"ContainerDied","Data":"0544b1ee1eaafdb570f60f30f7df478a62aef6caa0229e215abd3394a3d19a72"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.794771 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.794841 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bclgn\" (UniqueName: \"kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.794875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.795031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.795060 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: 
\"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.795089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.795115 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.795153 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts\") pod \"d4c3b8c8-1c2e-435d-8380-1374792be064\" (UID: \"d4c3b8c8-1c2e-435d-8380-1374792be064\") " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.796429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.802264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs" (OuterVolumeSpecName: "logs") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.803745 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae6adef2-48b4-4342-a13e-6b2541eeeff1","Type":"ContainerDied","Data":"7fe6c40ebcc01e5254c1042555b14a2f960cad9846877f482ccd0223f4b004e5"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.803784 4833 scope.go:117] "RemoveContainer" containerID="3a9fdb203e2d259b5c986b5aab84427317433c3a5996445f1eaabb1b237fb87d" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.803928 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.806687 4833 generic.go:334] "Generic (PLEG): container finished" podID="89b245df-2c38-4337-9a1b-41c31fc88e1c" containerID="3e5abe434e9b737f9d7af793cd27e1b2c513456790c8f9d7758d13757432594f" exitCode=0 Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.806737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" event={"ID":"89b245df-2c38-4337-9a1b-41c31fc88e1c","Type":"ContainerDied","Data":"3e5abe434e9b737f9d7af793cd27e1b2c513456790c8f9d7758d13757432594f"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.807969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f38cf44c-23a5-428d-8598-ec073d2148bf","Type":"ContainerStarted","Data":"37d1819f9125e5bb8943098b0970819030a4fb87dc3fe6f89b3b7df2d2843b89"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.808725 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.811080 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4c3b8c8-1c2e-435d-8380-1374792be064" 
containerID="ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12" exitCode=0 Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.811241 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.811975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerDied","Data":"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.812000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d4c3b8c8-1c2e-435d-8380-1374792be064","Type":"ContainerDied","Data":"ece419e3fa97f5b7a5a8c286d0f8a968801ef13946a830982216463a38a7e43a"} Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.816244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.839218 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts" (OuterVolumeSpecName: "scripts") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.846622 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn" (OuterVolumeSpecName: "kube-api-access-bclgn") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "kube-api-access-bclgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.895570 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897792 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897809 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897818 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897826 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c3b8c8-1c2e-435d-8380-1374792be064-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 
27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897834 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bclgn\" (UniqueName: \"kubernetes.io/projected/d4c3b8c8-1c2e-435d-8380-1374792be064-kube-api-access-bclgn\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.897853 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.901152 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.327224675 podStartE2EDuration="2.901137461s" podCreationTimestamp="2026-01-27 14:33:38 +0000 UTC" firstStartedPulling="2026-01-27 14:33:39.78779087 +0000 UTC m=+1321.439115272" lastFinishedPulling="2026-01-27 14:33:40.361703656 +0000 UTC m=+1322.013028058" observedRunningTime="2026-01-27 14:33:40.8496596 +0000 UTC m=+1322.500984002" watchObservedRunningTime="2026-01-27 14:33:40.901137461 +0000 UTC m=+1322.552461863" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.939712 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.957648 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 14:33:40 crc kubenswrapper[4833]: I0127 14:33:40.970136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data" (OuterVolumeSpecName: "config-data") pod "d4c3b8c8-1c2e-435d-8380-1374792be064" (UID: "d4c3b8c8-1c2e-435d-8380-1374792be064"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:40.999816 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:40.999843 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c3b8c8-1c2e-435d-8380-1374792be064-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:40.999853 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.108539 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.126458 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.126948 4833 scope.go:117] "RemoveContainer" containerID="5fb719ed1f26e8b15b731bd9effac6db8d4bb0c7a686eb5e9afaad1f272286e5" Jan 
27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.139641 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.140062 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140073 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.140096 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140103 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.140126 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140132 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.140143 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140155 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140317 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140333 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140349 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-log" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.140363 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" containerName="glance-httpd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.141295 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.143641 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7fbqk" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.143722 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.147647 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.147917 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.162808 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.195941 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.204566 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205150 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-logs\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205528 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.205663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jpk6\" (UniqueName: \"kubernetes.io/projected/6fa86b2e-2264-479f-861c-3d03d6e5edd4-kube-api-access-2jpk6\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.243229 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6adef2-48b4-4342-a13e-6b2541eeeff1" path="/var/lib/kubelet/pods/ae6adef2-48b4-4342-a13e-6b2541eeeff1/volumes" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.243948 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.253214 4833 scope.go:117] "RemoveContainer" containerID="ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.267714 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.269982 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.277001 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.278432 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.283914 4833 scope.go:117] "RemoveContainer" containerID="9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.296248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.306921 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jpk6\" (UniqueName: \"kubernetes.io/projected/6fa86b2e-2264-479f-861c-3d03d6e5edd4-kube-api-access-2jpk6\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307048 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc 
kubenswrapper[4833]: I0127 14:33:41.307098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-logs\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307338 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.307738 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.310817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-logs\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.310973 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fa86b2e-2264-479f-861c-3d03d6e5edd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.312990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.316340 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.319949 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.331665 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jpk6\" (UniqueName: \"kubernetes.io/projected/6fa86b2e-2264-479f-861c-3d03d6e5edd4-kube-api-access-2jpk6\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.333575 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fa86b2e-2264-479f-861c-3d03d6e5edd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.346668 4833 scope.go:117] "RemoveContainer" containerID="ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12" Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.347829 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12\": container with ID starting with ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12 not found: ID does not exist" containerID="ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.347880 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12"} err="failed to get container status \"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12\": rpc error: code = NotFound desc = could not find container 
\"ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12\": container with ID starting with ebc49abbe65d28dc5a26411f511a756e1f8afb8afca49fc29a6b3adeb57e4a12 not found: ID does not exist" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.347917 4833 scope.go:117] "RemoveContainer" containerID="9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3" Jan 27 14:33:41 crc kubenswrapper[4833]: E0127 14:33:41.350057 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3\": container with ID starting with 9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3 not found: ID does not exist" containerID="9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.350089 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3"} err="failed to get container status \"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3\": rpc error: code = NotFound desc = could not find container \"9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3\": container with ID starting with 9535cbc6658278f4510df7617b9cc757dcf21d1aa38465fbf00e796cf14430b3 not found: ID does not exist" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.368017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"6fa86b2e-2264-479f-861c-3d03d6e5edd4\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.370682 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76mw\" (UniqueName: \"kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw\") pod \"355b8f4b-7773-4472-a1a9-84ee61cba511\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts\") pod \"355b8f4b-7773-4472-a1a9-84ee61cba511\" (UID: \"355b8f4b-7773-4472-a1a9-84ee61cba511\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcv9\" (UniqueName: \"kubernetes.io/projected/ab7d4e94-9da8-4042-a857-27194e95d788-kube-api-access-frcv9\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409605 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409705 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.409824 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.410374 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "355b8f4b-7773-4472-a1a9-84ee61cba511" (UID: "355b8f4b-7773-4472-a1a9-84ee61cba511"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.416579 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw" (OuterVolumeSpecName: "kube-api-access-p76mw") pod "355b8f4b-7773-4472-a1a9-84ee61cba511" (UID: "355b8f4b-7773-4472-a1a9-84ee61cba511"). InnerVolumeSpecName "kube-api-access-p76mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.495847 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512184 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512355 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcv9\" (UniqueName: \"kubernetes.io/projected/ab7d4e94-9da8-4042-a857-27194e95d788-kube-api-access-frcv9\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512492 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512617 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355b8f4b-7773-4472-a1a9-84ee61cba511-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512675 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p76mw\" (UniqueName: \"kubernetes.io/projected/355b8f4b-7773-4472-a1a9-84ee61cba511-kube-api-access-p76mw\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.512911 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.513566 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.514367 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7d4e94-9da8-4042-a857-27194e95d788-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.521232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.522486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.523928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.526985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7d4e94-9da8-4042-a857-27194e95d788-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.532963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcv9\" (UniqueName: \"kubernetes.io/projected/ab7d4e94-9da8-4042-a857-27194e95d788-kube-api-access-frcv9\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.592488 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.594161 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.610916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab7d4e94-9da8-4042-a857-27194e95d788\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.684253 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts\") pod \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717512 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgd4b\" (UniqueName: \"kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b\") pod \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717564 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgx6b\" (UniqueName: \"kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b\") pod \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\" (UID: \"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717595 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klmjh\" (UniqueName: \"kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717632 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717720 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts\") pod \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\" (UID: \"7ca61e50-dbbc-4e99-ad42-9f769a410a6d\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717834 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717909 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") " Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.717942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key\") pod \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\" (UID: \"17c47588-5dcf-4028-b0f7-b650ab0d4f4e\") 
" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.725962 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" (UID: "fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.726298 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs" (OuterVolumeSpecName: "logs") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.726809 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ca61e50-dbbc-4e99-ad42-9f769a410a6d" (UID: "7ca61e50-dbbc-4e99-ad42-9f769a410a6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.733854 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b" (OuterVolumeSpecName: "kube-api-access-vgx6b") pod "fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" (UID: "fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53"). InnerVolumeSpecName "kube-api-access-vgx6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.735669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.743721 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh" (OuterVolumeSpecName: "kube-api-access-klmjh") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "kube-api-access-klmjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.752648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b" (OuterVolumeSpecName: "kube-api-access-cgd4b") pod "7ca61e50-dbbc-4e99-ad42-9f769a410a6d" (UID: "7ca61e50-dbbc-4e99-ad42-9f769a410a6d"). InnerVolumeSpecName "kube-api-access-cgd4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820703 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820736 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820746 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgd4b\" (UniqueName: \"kubernetes.io/projected/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-kube-api-access-cgd4b\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820758 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgx6b\" (UniqueName: \"kubernetes.io/projected/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53-kube-api-access-vgx6b\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820767 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klmjh\" (UniqueName: \"kubernetes.io/projected/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-kube-api-access-klmjh\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820776 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ca61e50-dbbc-4e99-ad42-9f769a410a6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.820785 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc 
kubenswrapper[4833]: I0127 14:33:41.824401 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data" (OuterVolumeSpecName: "config-data") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.916215 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.922036 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.923593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.929752 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cgrlm" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.931624 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cgrlm" event={"ID":"7ca61e50-dbbc-4e99-ad42-9f769a410a6d","Type":"ContainerDied","Data":"85ca2c745f7f92861e986a71060314005a03c92c3bd71973f562a0077a6d3201"} Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.931662 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ca2c745f7f92861e986a71060314005a03c92c3bd71973f562a0077a6d3201" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.942269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.978580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-v8nhd" event={"ID":"355b8f4b-7773-4472-a1a9-84ee61cba511","Type":"ContainerDied","Data":"bc3af4f48fe1507d68fb865608c86a9a2133950ee2a7f36ff532da86aac45ab7"} Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.978616 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3af4f48fe1507d68fb865608c86a9a2133950ee2a7f36ff532da86aac45ab7" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.978758 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-v8nhd" Jan 27 14:33:41 crc kubenswrapper[4833]: I0127 14:33:41.989536 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f38cf44c-23a5-428d-8598-ec073d2148bf","Type":"ContainerStarted","Data":"216713ff495cd9f0ccf9cd568cb657d0b80105ac5195eb4f6b6620082963d627"} Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.008881 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts" (OuterVolumeSpecName: "scripts") pod "17c47588-5dcf-4028-b0f7-b650ab0d4f4e" (UID: "17c47588-5dcf-4028-b0f7-b650ab0d4f4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.014486 4833 generic.go:334] "Generic (PLEG): container finished" podID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerID="cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e" exitCode=137 Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.014810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerDied","Data":"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e"} Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.014837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f64dd7dd-8w4dp" event={"ID":"17c47588-5dcf-4028-b0f7-b650ab0d4f4e","Type":"ContainerDied","Data":"2267442ea6ef27ee3b646c627c6e82143545fde7070e073fb070cfad96cdc012"} Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.014855 4833 scope.go:117] "RemoveContainer" containerID="5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.014964 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54f64dd7dd-8w4dp" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.026616 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.026834 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.026911 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/17c47588-5dcf-4028-b0f7-b650ab0d4f4e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.039202 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f54-account-create-update-hsskq" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.039193 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f54-account-create-update-hsskq" event={"ID":"fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53","Type":"ContainerDied","Data":"c50afcc3b43ec8a9ffd1c85feaafddb48dc66243f3157566c318d5343a997b44"} Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.039314 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50afcc3b43ec8a9ffd1c85feaafddb48dc66243f3157566c318d5343a997b44" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.088250 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.150127 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54f64dd7dd-8w4dp"] Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.257286 4833 
scope.go:117] "RemoveContainer" containerID="cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.337185 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.338048 4833 scope.go:117] "RemoveContainer" containerID="5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab" Jan 27 14:33:42 crc kubenswrapper[4833]: E0127 14:33:42.343795 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab\": container with ID starting with 5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab not found: ID does not exist" containerID="5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.343857 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab"} err="failed to get container status \"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab\": rpc error: code = NotFound desc = could not find container \"5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab\": container with ID starting with 5b8e4732b0c79541123442eabe44e8dd837945a5852f59df06e2a97eebeae2ab not found: ID does not exist" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.343887 4833 scope.go:117] "RemoveContainer" containerID="cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e" Jan 27 14:33:42 crc kubenswrapper[4833]: E0127 14:33:42.345597 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e\": container with ID starting with 
cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e not found: ID does not exist" containerID="cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.345630 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e"} err="failed to get container status \"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e\": rpc error: code = NotFound desc = could not find container \"cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e\": container with ID starting with cbe3d113d27ccb80438d909537362536cf79c2d2c5878865d69ff3bf6c96c34e not found: ID does not exist" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.440692 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.558949 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.566406 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts\") pod \"db269ee5-31fd-4d2a-83db-abe3047254fd\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.566599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdtj\" (UniqueName: \"kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj\") pod \"db269ee5-31fd-4d2a-83db-abe3047254fd\" (UID: \"db269ee5-31fd-4d2a-83db-abe3047254fd\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.567244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db269ee5-31fd-4d2a-83db-abe3047254fd" (UID: "db269ee5-31fd-4d2a-83db-abe3047254fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.568080 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db269ee5-31fd-4d2a-83db-abe3047254fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.585621 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj" (OuterVolumeSpecName: "kube-api-access-rwdtj") pod "db269ee5-31fd-4d2a-83db-abe3047254fd" (UID: "db269ee5-31fd-4d2a-83db-abe3047254fd"). InnerVolumeSpecName "kube-api-access-rwdtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.671174 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts\") pod \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.671287 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bgtd\" (UniqueName: \"kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd\") pod \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\" (UID: \"03ce025e-318d-4abf-accf-2a7a35d7ec0b\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.671713 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03ce025e-318d-4abf-accf-2a7a35d7ec0b" (UID: "03ce025e-318d-4abf-accf-2a7a35d7ec0b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.672827 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03ce025e-318d-4abf-accf-2a7a35d7ec0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.672854 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdtj\" (UniqueName: \"kubernetes.io/projected/db269ee5-31fd-4d2a-83db-abe3047254fd-kube-api-access-rwdtj\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.679017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd" (OuterVolumeSpecName: "kube-api-access-9bgtd") pod "03ce025e-318d-4abf-accf-2a7a35d7ec0b" (UID: "03ce025e-318d-4abf-accf-2a7a35d7ec0b"). InnerVolumeSpecName "kube-api-access-9bgtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.692632 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.774699 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bgtd\" (UniqueName: \"kubernetes.io/projected/03ce025e-318d-4abf-accf-2a7a35d7ec0b-kube-api-access-9bgtd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.790728 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:42 crc kubenswrapper[4833]: W0127 14:33:42.793793 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7d4e94_9da8_4042_a857_27194e95d788.slice/crio-e10c14f46970f47a0f59a6310d2080396f14784652302f7b22d561bcf7334b51 WatchSource:0}: Error finding container e10c14f46970f47a0f59a6310d2080396f14784652302f7b22d561bcf7334b51: Status 404 returned error can't find the container with id e10c14f46970f47a0f59a6310d2080396f14784652302f7b22d561bcf7334b51 Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.876027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnl7f\" (UniqueName: \"kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f\") pod \"89b245df-2c38-4337-9a1b-41c31fc88e1c\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.876103 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts\") pod \"89b245df-2c38-4337-9a1b-41c31fc88e1c\" (UID: \"89b245df-2c38-4337-9a1b-41c31fc88e1c\") " Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.877724 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89b245df-2c38-4337-9a1b-41c31fc88e1c" (UID: "89b245df-2c38-4337-9a1b-41c31fc88e1c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.880976 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f" (OuterVolumeSpecName: "kube-api-access-mnl7f") pod "89b245df-2c38-4337-9a1b-41c31fc88e1c" (UID: "89b245df-2c38-4337-9a1b-41c31fc88e1c"). InnerVolumeSpecName "kube-api-access-mnl7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.978217 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnl7f\" (UniqueName: \"kubernetes.io/projected/89b245df-2c38-4337-9a1b-41c31fc88e1c-kube-api-access-mnl7f\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:42 crc kubenswrapper[4833]: I0127 14:33:42.978425 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b245df-2c38-4337-9a1b-41c31fc88e1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.051920 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.051921 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9458-account-create-update-2ndkv" event={"ID":"89b245df-2c38-4337-9a1b-41c31fc88e1c","Type":"ContainerDied","Data":"f121fde75e59ffb51f43d7bcd001d66d384ebc23aa043d4c978d053076bfbcd8"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.052065 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f121fde75e59ffb51f43d7bcd001d66d384ebc23aa043d4c978d053076bfbcd8" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.058864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerStarted","Data":"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.059016 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="sg-core" containerID="cri-o://597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee" gracePeriod=30 Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.058985 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-central-agent" containerID="cri-o://3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a" gracePeriod=30 Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.059166 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.059200 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" 
containerName="ceilometer-notification-agent" containerID="cri-o://34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba" gracePeriod=30 Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.059157 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="proxy-httpd" containerID="cri-o://dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a" gracePeriod=30 Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.063527 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czh77" event={"ID":"03ce025e-318d-4abf-accf-2a7a35d7ec0b","Type":"ContainerDied","Data":"91445dff079c934b107445cabb58fc80c81f7384bb40c515bac307486cae3b7f"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.063548 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-czh77" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.063566 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91445dff079c934b107445cabb58fc80c81f7384bb40c515bac307486cae3b7f" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.065354 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1995-account-create-update-7x427" event={"ID":"db269ee5-31fd-4d2a-83db-abe3047254fd","Type":"ContainerDied","Data":"49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.065377 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49fc1ce656efb3bfea0c8815582626f71e08f7c1d001c54e524e494dcc475317" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.065419 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1995-account-create-update-7x427" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.074279 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fa86b2e-2264-479f-861c-3d03d6e5edd4","Type":"ContainerStarted","Data":"6aa4551262f1fd487f90ae5ad4a689b579974b2ea1d4098898f5f9badc3f8fbc"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.076904 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab7d4e94-9da8-4042-a857-27194e95d788","Type":"ContainerStarted","Data":"e10c14f46970f47a0f59a6310d2080396f14784652302f7b22d561bcf7334b51"} Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.090656 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.716908417 podStartE2EDuration="9.09063637s" podCreationTimestamp="2026-01-27 14:33:34 +0000 UTC" firstStartedPulling="2026-01-27 14:33:35.336609856 +0000 UTC m=+1316.987934268" lastFinishedPulling="2026-01-27 14:33:41.710337819 +0000 UTC m=+1323.361662221" observedRunningTime="2026-01-27 14:33:43.080759749 +0000 UTC m=+1324.732084151" watchObservedRunningTime="2026-01-27 14:33:43.09063637 +0000 UTC m=+1324.741960772" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.239701 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" path="/var/lib/kubelet/pods/17c47588-5dcf-4028-b0f7-b650ab0d4f4e/volumes" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.240688 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c3b8c8-1c2e-435d-8380-1374792be064" path="/var/lib/kubelet/pods/d4c3b8c8-1c2e-435d-8380-1374792be064/volumes" Jan 27 14:33:43 crc kubenswrapper[4833]: I0127 14:33:43.425186 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" 
podUID="534b7701-45c8-403b-8601-3e22e9177c61" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.088399 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fa86b2e-2264-479f-861c-3d03d6e5edd4","Type":"ContainerStarted","Data":"476b90610f7c2e10ef619858c19894a219fb077fc5f136b827095edd7dd0786f"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.088791 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fa86b2e-2264-479f-861c-3d03d6e5edd4","Type":"ContainerStarted","Data":"dfbce009c26a1a2e789ed32f4f92993bbdc1b58cb57bdc2735d507ddf6876cef"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.091502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab7d4e94-9da8-4042-a857-27194e95d788","Type":"ContainerStarted","Data":"1af8575bd28e5b5e24180c298d02bc15bcf4c49164309f5cd62351a31adc1fdc"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.091540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab7d4e94-9da8-4042-a857-27194e95d788","Type":"ContainerStarted","Data":"e03ee85dfdaf7be9391c3a981fafd017fec564c991d0216c28c09391e1f4cead"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094194 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerID="dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a" exitCode=0 Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094223 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerID="597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee" 
exitCode=2 Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094250 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerID="34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba" exitCode=0 Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerDied","Data":"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerDied","Data":"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.094300 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerDied","Data":"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba"} Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.113232 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.113212561 podStartE2EDuration="3.113212561s" podCreationTimestamp="2026-01-27 14:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:44.110622333 +0000 UTC m=+1325.761946735" watchObservedRunningTime="2026-01-27 14:33:44.113212561 +0000 UTC m=+1325.764536963" Jan 27 14:33:44 crc kubenswrapper[4833]: I0127 14:33:44.134950 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.134924716 podStartE2EDuration="3.134924716s" 
podCreationTimestamp="2026-01-27 14:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:44.128665527 +0000 UTC m=+1325.779989929" watchObservedRunningTime="2026-01-27 14:33:44.134924716 +0000 UTC m=+1325.786249118" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.801644 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.910646 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.940675 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943468 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzwv4\" (UniqueName: \"kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943562 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943682 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.943927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.944154 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.944515 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.944535 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d1b13fa-d037-4446-997b-beedc94e3e6c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.952007 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts" (OuterVolumeSpecName: "scripts") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.957600 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4" (OuterVolumeSpecName: "kube-api-access-qzwv4") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "kube-api-access-qzwv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:45 crc kubenswrapper[4833]: I0127 14:33:45.976281 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.045037 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.045273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") pod \"6d1b13fa-d037-4446-997b-beedc94e3e6c\" (UID: \"6d1b13fa-d037-4446-997b-beedc94e3e6c\") " Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.045922 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.045951 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzwv4\" (UniqueName: \"kubernetes.io/projected/6d1b13fa-d037-4446-997b-beedc94e3e6c-kube-api-access-qzwv4\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.045964 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4833]: W0127 14:33:46.046328 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6d1b13fa-d037-4446-997b-beedc94e3e6c/volumes/kubernetes.io~secret/combined-ca-bundle Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.046347 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.082127 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data" (OuterVolumeSpecName: "config-data") pod "6d1b13fa-d037-4446-997b-beedc94e3e6c" (UID: "6d1b13fa-d037-4446-997b-beedc94e3e6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.135497 4833 generic.go:334] "Generic (PLEG): container finished" podID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerID="3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a" exitCode=0 Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.135580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerDied","Data":"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a"} Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.135605 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.135623 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d1b13fa-d037-4446-997b-beedc94e3e6c","Type":"ContainerDied","Data":"9d44ffa283fb17171e20ff8254de0a48eabf4e3e831c4eb98c5a9244d1279b90"} Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.135640 4833 scope.go:117] "RemoveContainer" containerID="dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.136022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.148687 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.148746 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1b13fa-d037-4446-997b-beedc94e3e6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.161778 4833 scope.go:117] "RemoveContainer" containerID="597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.176019 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.179070 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.191305 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.194340 4833 scope.go:117] "RemoveContainer" 
containerID="34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.207642 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208145 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355b8f4b-7773-4472-a1a9-84ee61cba511" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208169 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="355b8f4b-7773-4472-a1a9-84ee61cba511" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208185 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="proxy-httpd" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208193 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="proxy-httpd" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208214 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca61e50-dbbc-4e99-ad42-9f769a410a6d" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208223 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca61e50-dbbc-4e99-ad42-9f769a410a6d" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208235 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208243 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208256 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db269ee5-31fd-4d2a-83db-abe3047254fd" 
containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208263 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="db269ee5-31fd-4d2a-83db-abe3047254fd" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208283 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="sg-core" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208290 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="sg-core" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208308 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-notification-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208317 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-notification-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208332 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon-log" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208340 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon-log" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208351 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208358 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208371 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="03ce025e-318d-4abf-accf-2a7a35d7ec0b" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208378 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce025e-318d-4abf-accf-2a7a35d7ec0b" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208389 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-central-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208397 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-central-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.208410 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b245df-2c38-4337-9a1b-41c31fc88e1c" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208417 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b245df-2c38-4337-9a1b-41c31fc88e1c" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208642 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b245df-2c38-4337-9a1b-41c31fc88e1c" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208667 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca61e50-dbbc-4e99-ad42-9f769a410a6d" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208682 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon-log" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208701 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c47588-5dcf-4028-b0f7-b650ab0d4f4e" containerName="horizon" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 
14:33:46.208712 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-notification-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208724 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="355b8f4b-7773-4472-a1a9-84ee61cba511" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208764 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="sg-core" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208777 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce025e-318d-4abf-accf-2a7a35d7ec0b" containerName="mariadb-database-create" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208791 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="proxy-httpd" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208805 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" containerName="ceilometer-central-agent" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208817 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="db269ee5-31fd-4d2a-83db-abe3047254fd" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.208826 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" containerName="mariadb-account-create-update" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.211342 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.216012 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.216165 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.216276 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.229640 4833 scope.go:117] "RemoveContainer" containerID="3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.240723 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.262818 4833 scope.go:117] "RemoveContainer" containerID="dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.263360 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a\": container with ID starting with dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a not found: ID does not exist" containerID="dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.263400 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a"} err="failed to get container status \"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a\": rpc error: code = NotFound desc = could not find container \"dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a\": 
container with ID starting with dc15c7d435a3640eb9b6b0663559cb8f47092ed565296b41ca086a587da4204a not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.263427 4833 scope.go:117] "RemoveContainer" containerID="597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.263880 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee\": container with ID starting with 597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee not found: ID does not exist" containerID="597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.263916 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee"} err="failed to get container status \"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee\": rpc error: code = NotFound desc = could not find container \"597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee\": container with ID starting with 597bcf8d136f94f024d400b5e372dc4a12e72904782b3655d739573e25e40fee not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.263943 4833 scope.go:117] "RemoveContainer" containerID="34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.265613 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba\": container with ID starting with 34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba not found: ID does not exist" 
containerID="34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.265645 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba"} err="failed to get container status \"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba\": rpc error: code = NotFound desc = could not find container \"34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba\": container with ID starting with 34ccd50b4755b78c887dee1af6c7babf9cc1a635a000e6942adf0ea22ba242ba not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.265659 4833 scope.go:117] "RemoveContainer" containerID="3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a" Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.266683 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a\": container with ID starting with 3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a not found: ID does not exist" containerID="3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.266756 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a"} err="failed to get container status \"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a\": rpc error: code = NotFound desc = could not find container \"3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a\": container with ID starting with 3d1cfb79c95a39513e44d24e83ff3e98b7b79fddeec0819e760d1d575b047f6a not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.347665 4833 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:46 crc kubenswrapper[4833]: E0127 14:33:46.349539 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-k7pw4 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="792e2edb-9ab5-4528-b65a-afbdbca5fec5" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.352808 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.352915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353103 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353196 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pw4\" (UniqueName: \"kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 
27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353506 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.353813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455687 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455763 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7pw4\" 
(UniqueName: \"kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455833 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.455997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 
14:33:46.456019 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.456698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.456723 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.461020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.461426 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.461730 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts\") pod \"ceilometer-0\" (UID: 
\"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.461748 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.464206 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:46 crc kubenswrapper[4833]: I0127 14:33:46.489064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7pw4\" (UniqueName: \"kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4\") pod \"ceilometer-0\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " pod="openstack/ceilometer-0" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.146553 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.165113 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.222702 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1b13fa-d037-4446-997b-beedc94e3e6c" path="/var/lib/kubelet/pods/6d1b13fa-d037-4446-997b-beedc94e3e6c/volumes" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.267843 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.268163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.268391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.268222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.268715 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.268900 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.269015 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.269170 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7pw4\" (UniqueName: \"kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.269348 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml\") pod \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\" (UID: \"792e2edb-9ab5-4528-b65a-afbdbca5fec5\") " Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.269363 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.270623 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.270899 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792e2edb-9ab5-4528-b65a-afbdbca5fec5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.274335 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.274776 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.275215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data" (OuterVolumeSpecName: "config-data") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.275492 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.277930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4" (OuterVolumeSpecName: "kube-api-access-k7pw4") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "kube-api-access-k7pw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.281390 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts" (OuterVolumeSpecName: "scripts") pod "792e2edb-9ab5-4528-b65a-afbdbca5fec5" (UID: "792e2edb-9ab5-4528-b65a-afbdbca5fec5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373136 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373166 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373175 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373184 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373192 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792e2edb-9ab5-4528-b65a-afbdbca5fec5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:47 crc kubenswrapper[4833]: I0127 14:33:47.373200 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7pw4\" (UniqueName: \"kubernetes.io/projected/792e2edb-9ab5-4528-b65a-afbdbca5fec5-kube-api-access-k7pw4\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.045511 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bnbkx"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.046989 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.049518 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hbljh" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.049662 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.050277 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.070161 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bnbkx"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.160794 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.188695 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.188813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f25n\" (UniqueName: \"kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.188870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.188939 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.223578 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.235654 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.269915 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.272707 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.277072 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.277291 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.277437 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.280187 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.289790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.289847 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f25n\" (UniqueName: \"kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.289891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.289957 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.295690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.305843 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.306230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.311361 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f25n\" (UniqueName: \"kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n\") pod \"nova-cell0-conductor-db-sync-bnbkx\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.365985 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.391459 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.391704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.391801 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.391907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.391990 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc 
kubenswrapper[4833]: I0127 14:33:48.392072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6n5b\" (UniqueName: \"kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.392172 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.392274 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.498526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.498906 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.498933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.498994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.499015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.499032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.500127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6n5b\" (UniqueName: \"kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.500225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 
14:33:48.500282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.500533 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.504400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.505318 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.507871 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.519729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts\") pod \"ceilometer-0\" (UID: 
\"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.521165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.523317 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6n5b\" (UniqueName: \"kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b\") pod \"ceilometer-0\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.589583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:48 crc kubenswrapper[4833]: I0127 14:33:48.850714 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bnbkx"] Jan 27 14:33:49 crc kubenswrapper[4833]: I0127 14:33:49.046788 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:49 crc kubenswrapper[4833]: W0127 14:33:49.051428 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a688c83_eee5_488c_acef_54eba46a5bf4.slice/crio-a0a6cf875380f405afc297a80a416856e6a9e3698e5c7a2c6510305b6526a706 WatchSource:0}: Error finding container a0a6cf875380f405afc297a80a416856e6a9e3698e5c7a2c6510305b6526a706: Status 404 returned error can't find the container with id a0a6cf875380f405afc297a80a416856e6a9e3698e5c7a2c6510305b6526a706 Jan 27 14:33:49 crc kubenswrapper[4833]: I0127 14:33:49.170736 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerStarted","Data":"a0a6cf875380f405afc297a80a416856e6a9e3698e5c7a2c6510305b6526a706"} Jan 27 14:33:49 crc kubenswrapper[4833]: I0127 14:33:49.172226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" event={"ID":"7b8eed79-9bd3-4b27-b185-10d0f449e158","Type":"ContainerStarted","Data":"0df83fa94f20c4929be6f37fa114f4dd697972e4159c2b4087f9bed1bbdc43b0"} Jan 27 14:33:49 crc kubenswrapper[4833]: I0127 14:33:49.226954 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792e2edb-9ab5-4528-b65a-afbdbca5fec5" path="/var/lib/kubelet/pods/792e2edb-9ab5-4528-b65a-afbdbca5fec5/volumes" Jan 27 14:33:49 crc kubenswrapper[4833]: I0127 14:33:49.286868 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.200710 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerStarted","Data":"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7"} Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.201216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerStarted","Data":"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5"} Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.496339 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.496399 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.536907 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.544903 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.917493 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.917777 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.948275 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:51 crc kubenswrapper[4833]: I0127 14:33:51.964739 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:52 crc kubenswrapper[4833]: I0127 14:33:52.177999 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:52 crc kubenswrapper[4833]: I0127 14:33:52.215645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerStarted","Data":"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44"} Jan 27 14:33:52 crc kubenswrapper[4833]: I0127 14:33:52.215727 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:52 crc kubenswrapper[4833]: I0127 14:33:52.216187 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:52 crc kubenswrapper[4833]: I0127 14:33:52.216208 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:33:52 crc 
kubenswrapper[4833]: I0127 14:33:52.216222 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.201254 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.212569 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.258766 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.258771 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.378307 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:54 crc kubenswrapper[4833]: I0127 14:33:54.532577 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.308840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerStarted","Data":"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e"} Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.308964 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-central-agent" containerID="cri-o://7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5" gracePeriod=30 Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.309329 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 
14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.309425 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="proxy-httpd" containerID="cri-o://5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e" gracePeriod=30 Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.309674 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="sg-core" containerID="cri-o://cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44" gracePeriod=30 Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.309725 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-notification-agent" containerID="cri-o://15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7" gracePeriod=30 Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.320769 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" event={"ID":"7b8eed79-9bd3-4b27-b185-10d0f449e158","Type":"ContainerStarted","Data":"284738e50d0903b413f5bb513dd88cafaffcc083a5c9f42d6a44b70c0bf5f752"} Jan 27 14:33:59 crc kubenswrapper[4833]: I0127 14:33:59.344856 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.517280776 podStartE2EDuration="11.344838546s" podCreationTimestamp="2026-01-27 14:33:48 +0000 UTC" firstStartedPulling="2026-01-27 14:33:49.053582736 +0000 UTC m=+1330.704907148" lastFinishedPulling="2026-01-27 14:33:58.881140506 +0000 UTC m=+1340.532464918" observedRunningTime="2026-01-27 14:33:59.339373414 +0000 UTC m=+1340.990697816" watchObservedRunningTime="2026-01-27 14:33:59.344838546 +0000 UTC m=+1340.996162948" Jan 27 14:33:59 crc 
kubenswrapper[4833]: I0127 14:33:59.355346 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" podStartSLOduration=1.309087071 podStartE2EDuration="11.355325922s" podCreationTimestamp="2026-01-27 14:33:48 +0000 UTC" firstStartedPulling="2026-01-27 14:33:48.86106407 +0000 UTC m=+1330.512388482" lastFinishedPulling="2026-01-27 14:33:58.907302921 +0000 UTC m=+1340.558627333" observedRunningTime="2026-01-27 14:33:59.352852116 +0000 UTC m=+1341.004176518" watchObservedRunningTime="2026-01-27 14:33:59.355325922 +0000 UTC m=+1341.006650324" Jan 27 14:34:00 crc kubenswrapper[4833]: E0127 14:34:00.156362 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a688c83_eee5_488c_acef_54eba46a5bf4.slice/crio-conmon-15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.357893 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerID="cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44" exitCode=2 Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.358177 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerID="15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7" exitCode=0 Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.357974 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerDied","Data":"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44"} Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.358215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerDied","Data":"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7"} Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.358227 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerDied","Data":"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5"} Jan 27 14:34:00 crc kubenswrapper[4833]: I0127 14:34:00.358187 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerID="7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5" exitCode=0 Jan 27 14:34:02 crc kubenswrapper[4833]: I0127 14:34:02.261093 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:34:02 crc kubenswrapper[4833]: I0127 14:34:02.261461 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:34:09 crc kubenswrapper[4833]: I0127 14:34:09.463532 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b8eed79-9bd3-4b27-b185-10d0f449e158" containerID="284738e50d0903b413f5bb513dd88cafaffcc083a5c9f42d6a44b70c0bf5f752" exitCode=0 Jan 27 14:34:09 crc kubenswrapper[4833]: I0127 14:34:09.463627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" 
event={"ID":"7b8eed79-9bd3-4b27-b185-10d0f449e158","Type":"ContainerDied","Data":"284738e50d0903b413f5bb513dd88cafaffcc083a5c9f42d6a44b70c0bf5f752"} Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.887826 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.972047 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts\") pod \"7b8eed79-9bd3-4b27-b185-10d0f449e158\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.972305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f25n\" (UniqueName: \"kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n\") pod \"7b8eed79-9bd3-4b27-b185-10d0f449e158\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.972393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data\") pod \"7b8eed79-9bd3-4b27-b185-10d0f449e158\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.973109 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle\") pod \"7b8eed79-9bd3-4b27-b185-10d0f449e158\" (UID: \"7b8eed79-9bd3-4b27-b185-10d0f449e158\") " Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.982788 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts" (OuterVolumeSpecName: "scripts") pod 
"7b8eed79-9bd3-4b27-b185-10d0f449e158" (UID: "7b8eed79-9bd3-4b27-b185-10d0f449e158"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.982822 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n" (OuterVolumeSpecName: "kube-api-access-5f25n") pod "7b8eed79-9bd3-4b27-b185-10d0f449e158" (UID: "7b8eed79-9bd3-4b27-b185-10d0f449e158"). InnerVolumeSpecName "kube-api-access-5f25n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4833]: I0127 14:34:10.999593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data" (OuterVolumeSpecName: "config-data") pod "7b8eed79-9bd3-4b27-b185-10d0f449e158" (UID: "7b8eed79-9bd3-4b27-b185-10d0f449e158"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.023333 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8eed79-9bd3-4b27-b185-10d0f449e158" (UID: "7b8eed79-9bd3-4b27-b185-10d0f449e158"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.076305 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f25n\" (UniqueName: \"kubernetes.io/projected/7b8eed79-9bd3-4b27-b185-10d0f449e158-kube-api-access-5f25n\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.076651 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.076800 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.076927 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eed79-9bd3-4b27-b185-10d0f449e158-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.490975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" event={"ID":"7b8eed79-9bd3-4b27-b185-10d0f449e158","Type":"ContainerDied","Data":"0df83fa94f20c4929be6f37fa114f4dd697972e4159c2b4087f9bed1bbdc43b0"} Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.491258 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df83fa94f20c4929be6f37fa114f4dd697972e4159c2b4087f9bed1bbdc43b0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.491072 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bnbkx" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.602421 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:34:11 crc kubenswrapper[4833]: E0127 14:34:11.602889 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8eed79-9bd3-4b27-b185-10d0f449e158" containerName="nova-cell0-conductor-db-sync" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.602908 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8eed79-9bd3-4b27-b185-10d0f449e158" containerName="nova-cell0-conductor-db-sync" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.603095 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8eed79-9bd3-4b27-b185-10d0f449e158" containerName="nova-cell0-conductor-db-sync" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.603709 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.606532 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.606581 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hbljh" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.619137 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.695377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 
14:34:11.695456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrzc\" (UniqueName: \"kubernetes.io/projected/2bcf5035-0b64-4321-b0e7-67ec66e543b9-kube-api-access-slrzc\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.695579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.797091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.797146 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrzc\" (UniqueName: \"kubernetes.io/projected/2bcf5035-0b64-4321-b0e7-67ec66e543b9-kube-api-access-slrzc\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.797227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.801536 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.801880 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bcf5035-0b64-4321-b0e7-67ec66e543b9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.814362 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrzc\" (UniqueName: \"kubernetes.io/projected/2bcf5035-0b64-4321-b0e7-67ec66e543b9-kube-api-access-slrzc\") pod \"nova-cell0-conductor-0\" (UID: \"2bcf5035-0b64-4321-b0e7-67ec66e543b9\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:11 crc kubenswrapper[4833]: I0127 14:34:11.969169 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:12 crc kubenswrapper[4833]: I0127 14:34:12.447206 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:34:12 crc kubenswrapper[4833]: I0127 14:34:12.506005 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2bcf5035-0b64-4321-b0e7-67ec66e543b9","Type":"ContainerStarted","Data":"167ab3515d453d7284932d7fc3fde1a23eb188ac3e1b4da33b8fe61088db9c89"} Jan 27 14:34:13 crc kubenswrapper[4833]: I0127 14:34:13.522832 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2bcf5035-0b64-4321-b0e7-67ec66e543b9","Type":"ContainerStarted","Data":"99b638036a554bfc2df9c82b76c2bf1574fff7b2b55b6ad4426c190b623f0159"} Jan 27 14:34:14 crc kubenswrapper[4833]: I0127 14:34:14.584832 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:18 crc kubenswrapper[4833]: I0127 14:34:18.603151 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.002776 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.025463 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=11.025432024 podStartE2EDuration="11.025432024s" podCreationTimestamp="2026-01-27 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:13.552487691 +0000 UTC m=+1355.203812133" watchObservedRunningTime="2026-01-27 
14:34:22.025432024 +0000 UTC m=+1363.676756426" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.472423 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gspd9"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.474497 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.481053 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.481930 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.504079 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gspd9"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.575895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.575984 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9gz\" (UniqueName: \"kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.576057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.576102 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.658138 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.665167 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.671027 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.678583 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.678655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9gz\" (UniqueName: \"kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.678719 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.678764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.692514 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.696251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.697237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.702961 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.730294 
4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9gz\" (UniqueName: \"kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz\") pod \"nova-cell0-cell-mapping-gspd9\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.780903 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.781061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.781093 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgl92\" (UniqueName: \"kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.781129 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.803382 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:34:22 crc 
kubenswrapper[4833]: I0127 14:34:22.812255 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.813360 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.838571 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.849544 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.858800 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.863462 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882572 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882628 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882733 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqp2m\" (UniqueName: \"kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.882815 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgl92\" (UniqueName: \"kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.889699 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: 
I0127 14:34:22.898594 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.903400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.927067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgl92\" (UniqueName: \"kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92\") pod \"nova-api-0\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " pod="openstack/nova-api-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.927326 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.962900 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.985255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.985882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data\") pod \"nova-metadata-0\" (UID: 
\"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.986350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2prp\" (UniqueName: \"kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.986498 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.986595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.986677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqp2m\" (UniqueName: \"kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.986761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 
crc kubenswrapper[4833]: I0127 14:34:22.987687 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.989414 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.990297 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.994283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.996062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:34:22 crc kubenswrapper[4833]: I0127 14:34:22.998548 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.000625 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.008639 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.016368 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqp2m\" (UniqueName: \"kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m\") pod \"nova-cell1-novncproxy-0\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.039740 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.056401 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088182 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmc59\" (UniqueName: 
\"kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088337 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2prp\" (UniqueName: \"kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088492 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnw58\" (UniqueName: \"kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088519 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.088567 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.091937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.092895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.102832 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.110784 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2prp\" (UniqueName: \"kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp\") pod \"nova-metadata-0\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.117173 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnw58\" (UniqueName: \"kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: 
I0127 14:34:23.190522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmc59\" (UniqueName: \"kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190538 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190560 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.190639 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.191402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.191921 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.193618 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.195667 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.196128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.200877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.201147 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data\") pod 
\"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.212901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnw58\" (UniqueName: \"kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58\") pod \"nova-scheduler-0\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.217177 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmc59\" (UniqueName: \"kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59\") pod \"dnsmasq-dns-865f5d856f-rn5kw\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") " pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.375703 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.400539 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.412345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.473031 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gspd9"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.535124 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-76xtr"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.536714 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.541617 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.541695 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.545188 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-76xtr"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.607724 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.607836 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gvh\" (UniqueName: \"kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.607979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.608049 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.655541 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.677017 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.680267 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gspd9" event={"ID":"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d","Type":"ContainerStarted","Data":"bccc6cc724f2ec47f2f3538faf47e01aabf44c21e4735fa62b4e13e763274dea"} Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.710114 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.710183 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.710239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts\") pod 
\"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.710301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gvh\" (UniqueName: \"kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.714188 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.716070 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.716653 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data\") pod \"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.736521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gvh\" (UniqueName: \"kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh\") pod 
\"nova-cell1-conductor-db-sync-76xtr\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.798227 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:23 crc kubenswrapper[4833]: W0127 14:34:23.801738 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b54d0a6_e718_4263_a374_390dcd0218fc.slice/crio-1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc WatchSource:0}: Error finding container 1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc: Status 404 returned error can't find the container with id 1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc Jan 27 14:34:23 crc kubenswrapper[4833]: I0127 14:34:23.933170 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.047165 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:24 crc kubenswrapper[4833]: W0127 14:34:24.078494 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0025271f_163b_4a9b_9814_a74935040a09.slice/crio-5a31423d7cefffa1d6073bd72c4ac4f69ca7e6914ef74e23ee2b30254e7f3de8 WatchSource:0}: Error finding container 5a31423d7cefffa1d6073bd72c4ac4f69ca7e6914ef74e23ee2b30254e7f3de8: Status 404 returned error can't find the container with id 5a31423d7cefffa1d6073bd72c4ac4f69ca7e6914ef74e23ee2b30254e7f3de8 Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.088827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:24 crc kubenswrapper[4833]: W0127 14:34:24.099683 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08cf0597_979f_4a77_aac8_bd04d43dc3b4.slice/crio-be21dfc42d89bef65a55b74a3a4551a6077864dce30482ac616e1a8b8fd35f0b WatchSource:0}: Error finding container be21dfc42d89bef65a55b74a3a4551a6077864dce30482ac616e1a8b8fd35f0b: Status 404 returned error can't find the container with id be21dfc42d89bef65a55b74a3a4551a6077864dce30482ac616e1a8b8fd35f0b Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.189150 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"] Jan 27 14:34:24 crc kubenswrapper[4833]: W0127 14:34:24.197541 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13ed824c_8836_4646_9a33_8f65299b3201.slice/crio-71409c93ef06e964402f2955c2a92197eb56aa22acb106425d54b215889bbb35 WatchSource:0}: Error finding container 71409c93ef06e964402f2955c2a92197eb56aa22acb106425d54b215889bbb35: Status 404 returned error can't find the container with id 71409c93ef06e964402f2955c2a92197eb56aa22acb106425d54b215889bbb35 Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.446390 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-76xtr"] Jan 27 14:34:24 crc kubenswrapper[4833]: W0127 14:34:24.477160 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de97b8b_2681_4c26_83d3_63e5ad9eee7b.slice/crio-d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88 WatchSource:0}: Error finding container d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88: Status 404 returned error can't find the container with id d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88 Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.722211 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-76xtr" 
event={"ID":"8de97b8b-2681-4c26-83d3-63e5ad9eee7b","Type":"ContainerStarted","Data":"d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.732829 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0025271f-163b-4a9b-9814-a74935040a09","Type":"ContainerStarted","Data":"5a31423d7cefffa1d6073bd72c4ac4f69ca7e6914ef74e23ee2b30254e7f3de8"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.741566 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08cf0597-979f-4a77-aac8-bd04d43dc3b4","Type":"ContainerStarted","Data":"be21dfc42d89bef65a55b74a3a4551a6077864dce30482ac616e1a8b8fd35f0b"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.745318 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b54d0a6-e718-4263-a374-390dcd0218fc","Type":"ContainerStarted","Data":"1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.747041 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gspd9" event={"ID":"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d","Type":"ContainerStarted","Data":"70a76e7c890080c0b4c055b53088525545528bcdb73af41a24f3059c18fc0092"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.751161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4","Type":"ContainerStarted","Data":"111e12feeadbee3934fa23d28b0655277fcbebe9957d2ca0cdbfe206802f0565"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.755367 4833 generic.go:334] "Generic (PLEG): container finished" podID="13ed824c-8836-4646-9a33-8f65299b3201" containerID="fc01118492213c26731532d82e5cbedc26e286f21a0f2f08c8aa961011c92e3a" exitCode=0 Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.755428 
4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" event={"ID":"13ed824c-8836-4646-9a33-8f65299b3201","Type":"ContainerDied","Data":"fc01118492213c26731532d82e5cbedc26e286f21a0f2f08c8aa961011c92e3a"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.755477 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" event={"ID":"13ed824c-8836-4646-9a33-8f65299b3201","Type":"ContainerStarted","Data":"71409c93ef06e964402f2955c2a92197eb56aa22acb106425d54b215889bbb35"} Jan 27 14:34:24 crc kubenswrapper[4833]: I0127 14:34:24.768271 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gspd9" podStartSLOduration=2.768247309 podStartE2EDuration="2.768247309s" podCreationTimestamp="2026-01-27 14:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:24.764909205 +0000 UTC m=+1366.416233607" watchObservedRunningTime="2026-01-27 14:34:24.768247309 +0000 UTC m=+1366.419571711" Jan 27 14:34:25 crc kubenswrapper[4833]: I0127 14:34:25.765362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-76xtr" event={"ID":"8de97b8b-2681-4c26-83d3-63e5ad9eee7b","Type":"ContainerStarted","Data":"82a4158d2509f18dda68bdb8dc457083323a5e1849a68915f75d0af07001d40b"} Jan 27 14:34:25 crc kubenswrapper[4833]: I0127 14:34:25.786627 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-76xtr" podStartSLOduration=2.786608944 podStartE2EDuration="2.786608944s" podCreationTimestamp="2026-01-27 14:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:25.78602738 +0000 UTC m=+1367.437351782" watchObservedRunningTime="2026-01-27 
14:34:25.786608944 +0000 UTC m=+1367.437933346" Jan 27 14:34:26 crc kubenswrapper[4833]: I0127 14:34:26.058265 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:26 crc kubenswrapper[4833]: I0127 14:34:26.069736 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:34:32 crc kubenswrapper[4833]: I0127 14:34:32.260693 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:34:32 crc kubenswrapper[4833]: I0127 14:34:32.261374 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.116310 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.116864 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-metadata-log,Image:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /bin/sh -c /usr/bin/tail -n+1 -F /var/log/nova/nova-metadata.log 
2>/dev/null],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65ch9bh68dh678h588h588h77h57fh54ch5c9h658h8h96hc9h65ch544h584h68fh58ch597h677h5b8h84hfdh9bh64dh85h68fhdch686hd7h66cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/nova,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2prp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8775 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-metadata-0_openstack(0025271f-163b-4a9b-9814-a74935040a09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.149741 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"nova-metadata-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"nova-metadata-metadata\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-api:current-podified\\\"\"]" pod="openstack/nova-metadata-0" podUID="0025271f-163b-4a9b-9814-a74935040a09" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.170021 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-api:current-podified" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.170207 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-api-log,Image:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /bin/sh -c /usr/bin/tail -n+1 -F /var/log/nova/nova-api.log 
2>/dev/null],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd7h97h59hf4hf8h5dfh5b8h5bhd5h545h586h656hfch95h65bh5c6h64h548h66dh64h64bh5f4h56fh698h587h5cch587h67fh5h5bh558h64fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/nova,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgl92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8774 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8774 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8774 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-api-0_openstack(1b54d0a6-e718-4263-a374-390dcd0218fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.176924 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"nova-api-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"nova-api-api\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-api:current-podified\\\"\"]" pod="openstack/nova-api-0" podUID="1b54d0a6-e718-4263-a374-390dcd0218fc" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.578043 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.674707 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.674853 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675548 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675776 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6n5b\" (UniqueName: \"kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.675855 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd\") pod \"4a688c83-eee5-488c-acef-54eba46a5bf4\" (UID: \"4a688c83-eee5-488c-acef-54eba46a5bf4\") " Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.676840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.684587 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts" (OuterVolumeSpecName: "scripts") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.684816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.684885 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b" (OuterVolumeSpecName: "kube-api-access-t6n5b") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "kube-api-access-t6n5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.710393 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.746306 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.755758 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779120 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779151 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779197 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779206 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779215 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6n5b\" (UniqueName: \"kubernetes.io/projected/4a688c83-eee5-488c-acef-54eba46a5bf4-kube-api-access-t6n5b\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779225 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4a688c83-eee5-488c-acef-54eba46a5bf4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.779232 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.800367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data" (OuterVolumeSpecName: "config-data") pod "4a688c83-eee5-488c-acef-54eba46a5bf4" (UID: "4a688c83-eee5-488c-acef-54eba46a5bf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.881166 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a688c83-eee5-488c-acef-54eba46a5bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.894010 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4","Type":"ContainerStarted","Data":"48a656f233b313983d5c9da49c3e668d9a2a075f62eef4e001722264d6055704"} Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.894171 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://48a656f233b313983d5c9da49c3e668d9a2a075f62eef4e001722264d6055704" gracePeriod=30 Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.896753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" 
event={"ID":"13ed824c-8836-4646-9a33-8f65299b3201","Type":"ContainerStarted","Data":"8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11"} Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.897197 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.901863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08cf0597-979f-4a77-aac8-bd04d43dc3b4","Type":"ContainerStarted","Data":"73854d08a8f743b857ba0798ce05af2e9782f8e0163fc7c7ff2b80303e90b16f"} Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.909647 4833 generic.go:334] "Generic (PLEG): container finished" podID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerID="5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e" exitCode=137 Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.910498 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.913371 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerDied","Data":"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e"} Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.913476 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4a688c83-eee5-488c-acef-54eba46a5bf4","Type":"ContainerDied","Data":"a0a6cf875380f405afc297a80a416856e6a9e3698e5c7a2c6510305b6526a706"} Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.913512 4833 scope.go:117] "RemoveContainer" containerID="5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e" Jan 27 14:34:35 crc kubenswrapper[4833]: E0127 14:34:35.916950 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"nova-api-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-api:current-podified\\\"\", failed to \"StartContainer\" for \"nova-api-api\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-api:current-podified\\\"\"]" pod="openstack/nova-api-0" podUID="1b54d0a6-e718-4263-a374-390dcd0218fc" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.929100 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.408375058 podStartE2EDuration="13.929050178s" podCreationTimestamp="2026-01-27 14:34:22 +0000 UTC" firstStartedPulling="2026-01-27 14:34:23.676745587 +0000 UTC m=+1365.328069989" lastFinishedPulling="2026-01-27 14:34:35.197420707 +0000 UTC m=+1376.848745109" observedRunningTime="2026-01-27 14:34:35.918867264 +0000 UTC m=+1377.570191686" watchObservedRunningTime="2026-01-27 14:34:35.929050178 +0000 UTC 
m=+1377.580374580" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.953696 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.858460689 podStartE2EDuration="13.953681508s" podCreationTimestamp="2026-01-27 14:34:22 +0000 UTC" firstStartedPulling="2026-01-27 14:34:24.104717959 +0000 UTC m=+1365.756042361" lastFinishedPulling="2026-01-27 14:34:35.199938778 +0000 UTC m=+1376.851263180" observedRunningTime="2026-01-27 14:34:35.949650641 +0000 UTC m=+1377.600975063" watchObservedRunningTime="2026-01-27 14:34:35.953681508 +0000 UTC m=+1377.605005910" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.958921 4833 scope.go:117] "RemoveContainer" containerID="cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44" Jan 27 14:34:35 crc kubenswrapper[4833]: I0127 14:34:35.997919 4833 scope.go:117] "RemoveContainer" containerID="15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.048284 4833 scope.go:117] "RemoveContainer" containerID="7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.076039 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" podStartSLOduration=14.076017745 podStartE2EDuration="14.076017745s" podCreationTimestamp="2026-01-27 14:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:36.054936431 +0000 UTC m=+1377.706260853" watchObservedRunningTime="2026-01-27 14:34:36.076017745 +0000 UTC m=+1377.727342147" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.081217 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.104841 4833 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.114394 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.114938 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="sg-core" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115008 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="sg-core" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.115030 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="proxy-httpd" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115039 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="proxy-httpd" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.115059 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-notification-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115070 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-notification-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.115092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-central-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115099 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-central-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115313 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" 
containerName="ceilometer-central-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115326 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="proxy-httpd" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115343 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="sg-core" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.115362 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" containerName="ceilometer-notification-agent" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.117619 4833 scope.go:117] "RemoveContainer" containerID="5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.118611 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.121325 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.121424 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.122565 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.131520 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e\": container with ID starting with 5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e not found: ID does not exist" containerID="5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e" Jan 27 14:34:36 crc 
kubenswrapper[4833]: I0127 14:34:36.131557 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e"} err="failed to get container status \"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e\": rpc error: code = NotFound desc = could not find container \"5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e\": container with ID starting with 5b3b3add4aef99ff651e91476e76453bf5c9f1728b9b0bbacd1fe1d96a13dc9e not found: ID does not exist" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.131579 4833 scope.go:117] "RemoveContainer" containerID="cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.131981 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44\": container with ID starting with cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44 not found: ID does not exist" containerID="cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.132003 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44"} err="failed to get container status \"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44\": rpc error: code = NotFound desc = could not find container \"cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44\": container with ID starting with cb7b7d586d55ec44f4158c09e735afe95c7ff6c0162e28745e4127b9d2d9ee44 not found: ID does not exist" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.132017 4833 scope.go:117] "RemoveContainer" containerID="15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7" Jan 27 
14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.132919 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7\": container with ID starting with 15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7 not found: ID does not exist" containerID="15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.132951 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7"} err="failed to get container status \"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7\": rpc error: code = NotFound desc = could not find container \"15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7\": container with ID starting with 15cb788d37776254cbf8028148de112ce4afa79449838c33a08eebf9ed2c26b7 not found: ID does not exist" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.132982 4833 scope.go:117] "RemoveContainer" containerID="7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5" Jan 27 14:34:36 crc kubenswrapper[4833]: E0127 14:34:36.133505 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5\": container with ID starting with 7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5 not found: ID does not exist" containerID="7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.133551 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5"} err="failed to get container status 
\"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5\": rpc error: code = NotFound desc = could not find container \"7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5\": container with ID starting with 7c227abbbbc82192c99d69774bf011859cb26e2fbeace05abfca74e0b25245b5 not found: ID does not exist" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.135306 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.191849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.191975 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.192035 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.192091 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92n6s\" (UniqueName: \"kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc 
kubenswrapper[4833]: I0127 14:34:36.192162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.192186 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.192242 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.192298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294128 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-92n6s\" (UniqueName: \"kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294175 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294221 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294311 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 
14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.294935 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.300479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.300584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.303020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.304750 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.304916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.316423 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92n6s\" (UniqueName: \"kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s\") pod \"ceilometer-0\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.403515 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.497686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2prp\" (UniqueName: \"kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp\") pod \"0025271f-163b-4a9b-9814-a74935040a09\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.497826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data\") pod \"0025271f-163b-4a9b-9814-a74935040a09\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.497874 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs\") pod \"0025271f-163b-4a9b-9814-a74935040a09\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.498086 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle\") pod \"0025271f-163b-4a9b-9814-a74935040a09\" (UID: \"0025271f-163b-4a9b-9814-a74935040a09\") " Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.498398 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs" (OuterVolumeSpecName: "logs") pod "0025271f-163b-4a9b-9814-a74935040a09" (UID: "0025271f-163b-4a9b-9814-a74935040a09"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.498942 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0025271f-163b-4a9b-9814-a74935040a09-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.502037 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp" (OuterVolumeSpecName: "kube-api-access-f2prp") pod "0025271f-163b-4a9b-9814-a74935040a09" (UID: "0025271f-163b-4a9b-9814-a74935040a09"). InnerVolumeSpecName "kube-api-access-f2prp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.502385 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0025271f-163b-4a9b-9814-a74935040a09" (UID: "0025271f-163b-4a9b-9814-a74935040a09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.502668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data" (OuterVolumeSpecName: "config-data") pod "0025271f-163b-4a9b-9814-a74935040a09" (UID: "0025271f-163b-4a9b-9814-a74935040a09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.514209 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.605857 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.607863 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2prp\" (UniqueName: \"kubernetes.io/projected/0025271f-163b-4a9b-9814-a74935040a09-kube-api-access-f2prp\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.607906 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0025271f-163b-4a9b-9814-a74935040a09-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.925198 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0025271f-163b-4a9b-9814-a74935040a09","Type":"ContainerDied","Data":"5a31423d7cefffa1d6073bd72c4ac4f69ca7e6914ef74e23ee2b30254e7f3de8"} Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.925340 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:36 crc kubenswrapper[4833]: I0127 14:34:36.970425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.044492 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.062874 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.074145 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.077357 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.082854 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.083650 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.089697 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.227005 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0025271f-163b-4a9b-9814-a74935040a09" path="/var/lib/kubelet/pods/0025271f-163b-4a9b-9814-a74935040a09/volumes" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.227528 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a688c83-eee5-488c-acef-54eba46a5bf4" path="/var/lib/kubelet/pods/4a688c83-eee5-488c-acef-54eba46a5bf4/volumes" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.244759 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zppqq\" (UniqueName: \"kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.244841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.244891 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.244929 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.245029 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.346684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs\") pod 
\"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.346776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppqq\" (UniqueName: \"kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.346830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.346878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.346927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.348019 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.352324 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.353007 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.356923 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.366961 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppqq\" (UniqueName: \"kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq\") pod \"nova-metadata-0\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.396125 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.863785 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:37 crc kubenswrapper[4833]: W0127 14:34:37.865666 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a287cb_2074_4976_b101_cf2389212a0d.slice/crio-289114868ca9bf135cbe35d882908bc7a7720d8b73629b3c8809d0bf2b1e3dc8 WatchSource:0}: Error finding container 289114868ca9bf135cbe35d882908bc7a7720d8b73629b3c8809d0bf2b1e3dc8: Status 404 returned error can't find the container with id 289114868ca9bf135cbe35d882908bc7a7720d8b73629b3c8809d0bf2b1e3dc8 Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.937172 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerStarted","Data":"289114868ca9bf135cbe35d882908bc7a7720d8b73629b3c8809d0bf2b1e3dc8"} Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.940458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerStarted","Data":"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"} Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.940512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerStarted","Data":"6745810b406f7d1c531e113a8a2a55823cfaafb390da11ac7191c12112b3ff63"} Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.944591 4833 generic.go:334] "Generic (PLEG): container finished" podID="14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" containerID="70a76e7c890080c0b4c055b53088525545528bcdb73af41a24f3059c18fc0092" exitCode=0 Jan 27 14:34:37 crc kubenswrapper[4833]: I0127 14:34:37.944633 4833 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gspd9" event={"ID":"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d","Type":"ContainerDied","Data":"70a76e7c890080c0b4c055b53088525545528bcdb73af41a24f3059c18fc0092"} Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.057912 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.401531 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.957837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerStarted","Data":"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"} Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.973590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerStarted","Data":"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a"} Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.973661 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerStarted","Data":"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b"} Jan 27 14:34:38 crc kubenswrapper[4833]: I0127 14:34:38.993885 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.499262274 podStartE2EDuration="1.993863682s" podCreationTimestamp="2026-01-27 14:34:37 +0000 UTC" firstStartedPulling="2026-01-27 14:34:37.867898213 +0000 UTC m=+1379.519222615" lastFinishedPulling="2026-01-27 14:34:38.362499631 +0000 UTC m=+1380.013824023" observedRunningTime="2026-01-27 14:34:38.99125805 +0000 UTC m=+1380.642582472" 
watchObservedRunningTime="2026-01-27 14:34:38.993863682 +0000 UTC m=+1380.645188084" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.366410 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.490979 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle\") pod \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.491121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data\") pod \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.491185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9gz\" (UniqueName: \"kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz\") pod \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.491847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts\") pod \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\" (UID: \"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d\") " Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.506800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz" (OuterVolumeSpecName: "kube-api-access-zm9gz") pod "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" 
(UID: "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d"). InnerVolumeSpecName "kube-api-access-zm9gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.510494 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts" (OuterVolumeSpecName: "scripts") pod "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" (UID: "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.522094 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" (UID: "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.539173 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data" (OuterVolumeSpecName: "config-data") pod "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" (UID: "14aee9b7-55c7-4bcd-b79c-e39e491c4d5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.594653 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.594694 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.594708 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9gz\" (UniqueName: \"kubernetes.io/projected/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-kube-api-access-zm9gz\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.594721 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.987789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerStarted","Data":"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"} Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.989568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gspd9" event={"ID":"14aee9b7-55c7-4bcd-b79c-e39e491c4d5d","Type":"ContainerDied","Data":"bccc6cc724f2ec47f2f3538faf47e01aabf44c21e4735fa62b4e13e763274dea"} Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 14:34:39.989605 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bccc6cc724f2ec47f2f3538faf47e01aabf44c21e4735fa62b4e13e763274dea" Jan 27 14:34:39 crc kubenswrapper[4833]: I0127 
14:34:39.989712 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gspd9" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.155537 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.164092 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.164658 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" containerName="nova-scheduler-scheduler" containerID="cri-o://73854d08a8f743b857ba0798ce05af2e9782f8e0163fc7c7ff2b80303e90b16f" gracePeriod=30 Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.201485 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.622761 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.716416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle\") pod \"1b54d0a6-e718-4263-a374-390dcd0218fc\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.716499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs\") pod \"1b54d0a6-e718-4263-a374-390dcd0218fc\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.716713 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgl92\" (UniqueName: \"kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92\") pod \"1b54d0a6-e718-4263-a374-390dcd0218fc\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.716775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs" (OuterVolumeSpecName: "logs") pod "1b54d0a6-e718-4263-a374-390dcd0218fc" (UID: "1b54d0a6-e718-4263-a374-390dcd0218fc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.716910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data\") pod \"1b54d0a6-e718-4263-a374-390dcd0218fc\" (UID: \"1b54d0a6-e718-4263-a374-390dcd0218fc\") " Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.717659 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b54d0a6-e718-4263-a374-390dcd0218fc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.722041 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b54d0a6-e718-4263-a374-390dcd0218fc" (UID: "1b54d0a6-e718-4263-a374-390dcd0218fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.724520 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92" (OuterVolumeSpecName: "kube-api-access-tgl92") pod "1b54d0a6-e718-4263-a374-390dcd0218fc" (UID: "1b54d0a6-e718-4263-a374-390dcd0218fc"). InnerVolumeSpecName "kube-api-access-tgl92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.725153 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data" (OuterVolumeSpecName: "config-data") pod "1b54d0a6-e718-4263-a374-390dcd0218fc" (UID: "1b54d0a6-e718-4263-a374-390dcd0218fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.818974 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.819016 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b54d0a6-e718-4263-a374-390dcd0218fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:40 crc kubenswrapper[4833]: I0127 14:34:40.819027 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgl92\" (UniqueName: \"kubernetes.io/projected/1b54d0a6-e718-4263-a374-390dcd0218fc-kube-api-access-tgl92\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.004094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b54d0a6-e718-4263-a374-390dcd0218fc","Type":"ContainerDied","Data":"1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc"} Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.004114 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.006391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerStarted","Data":"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"} Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.006656 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.008868 4833 generic.go:334] "Generic (PLEG): container finished" podID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" containerID="73854d08a8f743b857ba0798ce05af2e9782f8e0163fc7c7ff2b80303e90b16f" exitCode=0 Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.009033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08cf0597-979f-4a77-aac8-bd04d43dc3b4","Type":"ContainerDied","Data":"73854d08a8f743b857ba0798ce05af2e9782f8e0163fc7c7ff2b80303e90b16f"} Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.009083 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-log" containerID="cri-o://1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.009789 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-metadata" containerID="cri-o://cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.029844 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.598938376 
podStartE2EDuration="5.029821232s" podCreationTimestamp="2026-01-27 14:34:36 +0000 UTC" firstStartedPulling="2026-01-27 14:34:36.973817834 +0000 UTC m=+1378.625142236" lastFinishedPulling="2026-01-27 14:34:40.40470069 +0000 UTC m=+1382.056025092" observedRunningTime="2026-01-27 14:34:41.026041402 +0000 UTC m=+1382.677365814" watchObservedRunningTime="2026-01-27 14:34:41.029821232 +0000 UTC m=+1382.681145634" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.082435 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.098570 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.138850 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:41 crc kubenswrapper[4833]: E0127 14:34:41.139307 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" containerName="nova-manage" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.139328 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" containerName="nova-manage" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.139584 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" containerName="nova-manage" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.145145 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.152087 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.152669 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.227017 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b54d0a6-e718-4263-a374-390dcd0218fc" path="/var/lib/kubelet/pods/1b54d0a6-e718-4263-a374-390dcd0218fc/volumes" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.228384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.228742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.228912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.229008 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfplv\" (UniqueName: \"kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv\") pod 
\"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: E0127 14:34:41.241853 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b54d0a6_e718_4263_a374_390dcd0218fc.slice/crio-1331d82d7255f66a337bc2ea3c28ea2d27f9ea4862914e431f715b4cd4e1f7fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a287cb_2074_4976_b101_cf2389212a0d.slice/crio-1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b54d0a6_e718_4263_a374_390dcd0218fc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a287cb_2074_4976_b101_cf2389212a0d.slice/crio-conmon-1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a287cb_2074_4976_b101_cf2389212a0d.slice/crio-conmon-cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.330862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.331003 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data\") pod 
\"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.331081 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.331102 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfplv\" (UniqueName: \"kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.332879 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.336583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.337380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.353384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfplv\" 
(UniqueName: \"kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv\") pod \"nova-api-0\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.403341 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.474972 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.548159 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data\") pod \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.548338 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle\") pod \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.548454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnw58\" (UniqueName: \"kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58\") pod \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\" (UID: \"08cf0597-979f-4a77-aac8-bd04d43dc3b4\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.553416 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58" (OuterVolumeSpecName: "kube-api-access-gnw58") pod "08cf0597-979f-4a77-aac8-bd04d43dc3b4" (UID: "08cf0597-979f-4a77-aac8-bd04d43dc3b4"). 
InnerVolumeSpecName "kube-api-access-gnw58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.634285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08cf0597-979f-4a77-aac8-bd04d43dc3b4" (UID: "08cf0597-979f-4a77-aac8-bd04d43dc3b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.651890 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.651931 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnw58\" (UniqueName: \"kubernetes.io/projected/08cf0597-979f-4a77-aac8-bd04d43dc3b4-kube-api-access-gnw58\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.660893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data" (OuterVolumeSpecName: "config-data") pod "08cf0597-979f-4a77-aac8-bd04d43dc3b4" (UID: "08cf0597-979f-4a77-aac8-bd04d43dc3b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.756118 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cf0597-979f-4a77-aac8-bd04d43dc3b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.788407 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle\") pod \"c9a287cb-2074-4976-b101-cf2389212a0d\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857180 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data\") pod \"c9a287cb-2074-4976-b101-cf2389212a0d\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs\") pod \"c9a287cb-2074-4976-b101-cf2389212a0d\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857467 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppqq\" (UniqueName: \"kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq\") pod \"c9a287cb-2074-4976-b101-cf2389212a0d\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857556 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs\") pod \"c9a287cb-2074-4976-b101-cf2389212a0d\" (UID: \"c9a287cb-2074-4976-b101-cf2389212a0d\") " Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.857800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs" (OuterVolumeSpecName: "logs") pod "c9a287cb-2074-4976-b101-cf2389212a0d" (UID: "c9a287cb-2074-4976-b101-cf2389212a0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.858999 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9a287cb-2074-4976-b101-cf2389212a0d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.864182 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq" (OuterVolumeSpecName: "kube-api-access-zppqq") pod "c9a287cb-2074-4976-b101-cf2389212a0d" (UID: "c9a287cb-2074-4976-b101-cf2389212a0d"). InnerVolumeSpecName "kube-api-access-zppqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.882041 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data" (OuterVolumeSpecName: "config-data") pod "c9a287cb-2074-4976-b101-cf2389212a0d" (UID: "c9a287cb-2074-4976-b101-cf2389212a0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.883769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9a287cb-2074-4976-b101-cf2389212a0d" (UID: "c9a287cb-2074-4976-b101-cf2389212a0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.913156 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c9a287cb-2074-4976-b101-cf2389212a0d" (UID: "c9a287cb-2074-4976-b101-cf2389212a0d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.955105 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.963138 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.963222 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zppqq\" (UniqueName: \"kubernetes.io/projected/c9a287cb-2074-4976-b101-cf2389212a0d-kube-api-access-zppqq\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.963241 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:41 crc kubenswrapper[4833]: I0127 14:34:41.963258 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a287cb-2074-4976-b101-cf2389212a0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018286 4833 generic.go:334] "Generic (PLEG): container finished" podID="c9a287cb-2074-4976-b101-cf2389212a0d" 
containerID="cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" exitCode=0 Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018318 4833 generic.go:334] "Generic (PLEG): container finished" podID="c9a287cb-2074-4976-b101-cf2389212a0d" containerID="1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" exitCode=143 Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerDied","Data":"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a"} Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018411 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerDied","Data":"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b"} Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018421 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9a287cb-2074-4976-b101-cf2389212a0d","Type":"ContainerDied","Data":"289114868ca9bf135cbe35d882908bc7a7720d8b73629b3c8809d0bf2b1e3dc8"} Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018437 4833 scope.go:117] "RemoveContainer" containerID="cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.018860 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.020809 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08cf0597-979f-4a77-aac8-bd04d43dc3b4","Type":"ContainerDied","Data":"be21dfc42d89bef65a55b74a3a4551a6077864dce30482ac616e1a8b8fd35f0b"} Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.020871 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.023869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerStarted","Data":"1eeab57732760052e6278340bf93ea4d2218c3e0b9d95fc002af22e339cda776"} Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.043081 4833 scope.go:117] "RemoveContainer" containerID="1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.079676 4833 scope.go:117] "RemoveContainer" containerID="cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" Jan 27 14:34:42 crc kubenswrapper[4833]: E0127 14:34:42.080142 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a\": container with ID starting with cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a not found: ID does not exist" containerID="cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.080177 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a"} err="failed to get container status \"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a\": rpc 
error: code = NotFound desc = could not find container \"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a\": container with ID starting with cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a not found: ID does not exist" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.080221 4833 scope.go:117] "RemoveContainer" containerID="1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.087496 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: E0127 14:34:42.091012 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b\": container with ID starting with 1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b not found: ID does not exist" containerID="1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.091060 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b"} err="failed to get container status \"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b\": rpc error: code = NotFound desc = could not find container \"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b\": container with ID starting with 1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b not found: ID does not exist" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.091092 4833 scope.go:117] "RemoveContainer" containerID="cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.095569 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a"} err="failed to get container status \"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a\": rpc error: code = NotFound desc = could not find container \"cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a\": container with ID starting with cad3ce0f1f80b924a187801edacdbeff8183bf9ea864c70815c0c5332e181a7a not found: ID does not exist" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.095616 4833 scope.go:117] "RemoveContainer" containerID="1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.096071 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b"} err="failed to get container status \"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b\": rpc error: code = NotFound desc = could not find container \"1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b\": container with ID starting with 1f85c973a0c2148a02b4281b84c633f4bbdc091f07a650734d97feec62eb793b not found: ID does not exist" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.096113 4833 scope.go:117] "RemoveContainer" containerID="73854d08a8f743b857ba0798ce05af2e9782f8e0163fc7c7ff2b80303e90b16f" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.099486 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.111555 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.123779 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.134682 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: E0127 14:34:42.135135 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-log" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135146 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-log" Jan 27 14:34:42 crc kubenswrapper[4833]: E0127 14:34:42.135169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" containerName="nova-scheduler-scheduler" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135175 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" containerName="nova-scheduler-scheduler" Jan 27 14:34:42 crc kubenswrapper[4833]: E0127 14:34:42.135198 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-metadata" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135205 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-metadata" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135383 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" containerName="nova-scheduler-scheduler" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135394 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-log" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.135408 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" containerName="nova-metadata-metadata" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.136530 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.144546 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.144982 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.155513 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.168282 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.170177 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.173429 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.184724 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.282824 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96nn\" (UniqueName: \"kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.282870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 
14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.282895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.282994 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.283025 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.283042 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.283085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.283102 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7dhx\" (UniqueName: \"kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.384832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.384873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7dhx\" (UniqueName: \"kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.384916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96nn\" (UniqueName: \"kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.384941 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.384960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.385055 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.385087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.385107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.385411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.389023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.389178 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.389272 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.392108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.392363 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.403690 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7dhx\" (UniqueName: \"kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx\") pod \"nova-metadata-0\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.407709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96nn\" (UniqueName: 
\"kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn\") pod \"nova-scheduler-0\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " pod="openstack/nova-scheduler-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.463737 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:34:42 crc kubenswrapper[4833]: I0127 14:34:42.497902 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.008522 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:34:43 crc kubenswrapper[4833]: W0127 14:34:43.011039 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41bfcfc_902f_4497_911c_c266554297fe.slice/crio-a9ba372ee97fcb4fb416d18e8d4b4b3265f61f85a2f6c54da4659b574c30409d WatchSource:0}: Error finding container a9ba372ee97fcb4fb416d18e8d4b4b3265f61f85a2f6c54da4659b574c30409d: Status 404 returned error can't find the container with id a9ba372ee97fcb4fb416d18e8d4b4b3265f61f85a2f6c54da4659b574c30409d Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.037511 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerStarted","Data":"a9ba372ee97fcb4fb416d18e8d4b4b3265f61f85a2f6c54da4659b574c30409d"} Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.040105 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerStarted","Data":"d47f1e663ef10649c575615729acfcd0fd466fdc25beebe3cb9f54b9d9d8c600"} Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.040170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerStarted","Data":"6ff89710e397ecfd9f0deb11fc10bd5a3e1da45337fa106e52be4875ac86db3c"} Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.043391 4833 generic.go:334] "Generic (PLEG): container finished" podID="8de97b8b-2681-4c26-83d3-63e5ad9eee7b" containerID="82a4158d2509f18dda68bdb8dc457083323a5e1849a68915f75d0af07001d40b" exitCode=0 Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.043581 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-76xtr" event={"ID":"8de97b8b-2681-4c26-83d3-63e5ad9eee7b","Type":"ContainerDied","Data":"82a4158d2509f18dda68bdb8dc457083323a5e1849a68915f75d0af07001d40b"} Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.062523 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.062504923 podStartE2EDuration="2.062504923s" podCreationTimestamp="2026-01-27 14:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:43.060823283 +0000 UTC m=+1384.712147695" watchObservedRunningTime="2026-01-27 14:34:43.062504923 +0000 UTC m=+1384.713829325" Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.114017 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:34:43 crc kubenswrapper[4833]: W0127 14:34:43.119580 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cae6e39_d4ec_4eb5_968d_64120c5e81fb.slice/crio-13e512cd6c1b4c0f6214c0e5fe17aed13774ea7ce6d11bcf33eca1f2ff097346 WatchSource:0}: Error finding container 13e512cd6c1b4c0f6214c0e5fe17aed13774ea7ce6d11bcf33eca1f2ff097346: Status 404 returned error can't find the container with id 13e512cd6c1b4c0f6214c0e5fe17aed13774ea7ce6d11bcf33eca1f2ff097346 Jan 27 14:34:43 crc 
kubenswrapper[4833]: I0127 14:34:43.236159 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cf0597-979f-4a77-aac8-bd04d43dc3b4" path="/var/lib/kubelet/pods/08cf0597-979f-4a77-aac8-bd04d43dc3b4/volumes" Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.236968 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a287cb-2074-4976-b101-cf2389212a0d" path="/var/lib/kubelet/pods/c9a287cb-2074-4976-b101-cf2389212a0d/volumes" Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.414327 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.495785 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 14:34:43 crc kubenswrapper[4833]: I0127 14:34:43.496066 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="dnsmasq-dns" containerID="cri-o://99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd" gracePeriod=10 Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.037584 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.057004 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerStarted","Data":"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.057243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerStarted","Data":"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.059960 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cae6e39-d4ec-4eb5-968d-64120c5e81fb","Type":"ContainerStarted","Data":"e37754861d1b42d651d3664d4ef042b095643f06c4253fae84fe9b359256044d"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.059994 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cae6e39-d4ec-4eb5-968d-64120c5e81fb","Type":"ContainerStarted","Data":"13e512cd6c1b4c0f6214c0e5fe17aed13774ea7ce6d11bcf33eca1f2ff097346"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.064139 4833 generic.go:334] "Generic (PLEG): container finished" podID="5959a39f-6b69-4f81-9cb8-541268073335" containerID="99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd" exitCode=0 Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.064383 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.064572 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" event={"ID":"5959a39f-6b69-4f81-9cb8-541268073335","Type":"ContainerDied","Data":"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.064619 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-4cgcq" event={"ID":"5959a39f-6b69-4f81-9cb8-541268073335","Type":"ContainerDied","Data":"88294e190f14e817d86ab67714ecef5f68110a5640a79bf808d31fb64ce4f5cc"} Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.064636 4833 scope.go:117] "RemoveContainer" containerID="99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.076519 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.076501982 podStartE2EDuration="2.076501982s" podCreationTimestamp="2026-01-27 14:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:44.074037884 +0000 UTC m=+1385.725362306" watchObservedRunningTime="2026-01-27 14:34:44.076501982 +0000 UTC m=+1385.727826384" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.103716 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.103698783 podStartE2EDuration="2.103698783s" podCreationTimestamp="2026-01-27 14:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:44.097978367 +0000 UTC m=+1385.749302769" watchObservedRunningTime="2026-01-27 14:34:44.103698783 +0000 UTC m=+1385.755023175" Jan 27 
14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.136283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.136434 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.142846 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.142986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.143060 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.143248 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zk6h\" (UniqueName: 
\"kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h\") pod \"5959a39f-6b69-4f81-9cb8-541268073335\" (UID: \"5959a39f-6b69-4f81-9cb8-541268073335\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.151731 4833 scope.go:117] "RemoveContainer" containerID="9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.183716 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h" (OuterVolumeSpecName: "kube-api-access-2zk6h") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "kube-api-access-2zk6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.202933 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.208549 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config" (OuterVolumeSpecName: "config") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.209613 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.225709 4833 scope.go:117] "RemoveContainer" containerID="99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd" Jan 27 14:34:44 crc kubenswrapper[4833]: E0127 14:34:44.228746 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd\": container with ID starting with 99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd not found: ID does not exist" containerID="99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.228798 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd"} err="failed to get container status \"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd\": rpc error: code = NotFound desc = could not find container \"99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd\": container with ID starting with 99580174c6057662650ea7451d1deeb98cb15f341585b03b217216e5bae303bd not found: ID does not exist" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.228831 4833 scope.go:117] "RemoveContainer" containerID="9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564" Jan 27 14:34:44 crc kubenswrapper[4833]: E0127 14:34:44.229628 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564\": container with ID starting with 9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564 not found: ID does not exist" containerID="9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.229660 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564"} err="failed to get container status \"9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564\": rpc error: code = NotFound desc = could not find container \"9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564\": container with ID starting with 9fc13b1631656c80b9a5b86f371dd5264b4416ca48154017db34afb6b6d9a564 not found: ID does not exist" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.248903 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.249338 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5959a39f-6b69-4f81-9cb8-541268073335" (UID: "5959a39f-6b69-4f81-9cb8-541268073335"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252478 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252513 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252525 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252534 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252544 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zk6h\" (UniqueName: \"kubernetes.io/projected/5959a39f-6b69-4f81-9cb8-541268073335-kube-api-access-2zk6h\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.252553 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5959a39f-6b69-4f81-9cb8-541268073335-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.417528 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.422019 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-4cgcq"] Jan 27 
14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.536336 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.660631 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle\") pod \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.660738 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts\") pod \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.660866 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data\") pod \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.660945 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5gvh\" (UniqueName: \"kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh\") pod \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\" (UID: \"8de97b8b-2681-4c26-83d3-63e5ad9eee7b\") " Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.664877 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh" (OuterVolumeSpecName: "kube-api-access-f5gvh") pod "8de97b8b-2681-4c26-83d3-63e5ad9eee7b" (UID: "8de97b8b-2681-4c26-83d3-63e5ad9eee7b"). 
InnerVolumeSpecName "kube-api-access-f5gvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.665124 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts" (OuterVolumeSpecName: "scripts") pod "8de97b8b-2681-4c26-83d3-63e5ad9eee7b" (UID: "8de97b8b-2681-4c26-83d3-63e5ad9eee7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.691595 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data" (OuterVolumeSpecName: "config-data") pod "8de97b8b-2681-4c26-83d3-63e5ad9eee7b" (UID: "8de97b8b-2681-4c26-83d3-63e5ad9eee7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.704618 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de97b8b-2681-4c26-83d3-63e5ad9eee7b" (UID: "8de97b8b-2681-4c26-83d3-63e5ad9eee7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.763668 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.763711 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.763729 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5gvh\" (UniqueName: \"kubernetes.io/projected/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-kube-api-access-f5gvh\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:44 crc kubenswrapper[4833]: I0127 14:34:44.763745 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de97b8b-2681-4c26-83d3-63e5ad9eee7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.075833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-76xtr" event={"ID":"8de97b8b-2681-4c26-83d3-63e5ad9eee7b","Type":"ContainerDied","Data":"d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88"} Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.076164 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d59b6ac7b8517e670060129998d3ad6f713e4ed09a04ddb0d012a01956680f88" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.076033 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-76xtr" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157146 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:34:45 crc kubenswrapper[4833]: E0127 14:34:45.157538 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="dnsmasq-dns" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157555 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="dnsmasq-dns" Jan 27 14:34:45 crc kubenswrapper[4833]: E0127 14:34:45.157567 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de97b8b-2681-4c26-83d3-63e5ad9eee7b" containerName="nova-cell1-conductor-db-sync" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157574 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de97b8b-2681-4c26-83d3-63e5ad9eee7b" containerName="nova-cell1-conductor-db-sync" Jan 27 14:34:45 crc kubenswrapper[4833]: E0127 14:34:45.157602 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="init" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157610 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="init" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157799 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de97b8b-2681-4c26-83d3-63e5ad9eee7b" containerName="nova-cell1-conductor-db-sync" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.157821 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5959a39f-6b69-4f81-9cb8-541268073335" containerName="dnsmasq-dns" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.158429 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.163727 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.181113 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.222150 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5959a39f-6b69-4f81-9cb8-541268073335" path="/var/lib/kubelet/pods/5959a39f-6b69-4f81-9cb8-541268073335/volumes" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.273796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.273990 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.274044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf8g\" (UniqueName: \"kubernetes.io/projected/34ce5a19-5697-4b62-8ec7-220c35fb2123-kube-api-access-gqf8g\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.376467 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.376560 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.376589 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqf8g\" (UniqueName: \"kubernetes.io/projected/34ce5a19-5697-4b62-8ec7-220c35fb2123-kube-api-access-gqf8g\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.380919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.386380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ce5a19-5697-4b62-8ec7-220c35fb2123-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.393999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqf8g\" (UniqueName: \"kubernetes.io/projected/34ce5a19-5697-4b62-8ec7-220c35fb2123-kube-api-access-gqf8g\") pod 
\"nova-cell1-conductor-0\" (UID: \"34ce5a19-5697-4b62-8ec7-220c35fb2123\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:45 crc kubenswrapper[4833]: I0127 14:34:45.479047 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:46 crc kubenswrapper[4833]: I0127 14:34:46.010006 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:34:46 crc kubenswrapper[4833]: I0127 14:34:46.091224 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"34ce5a19-5697-4b62-8ec7-220c35fb2123","Type":"ContainerStarted","Data":"9b79ba559cfa93d940620e8aea160d6238685fcc5e203c333754a0e66a2e4c0e"} Jan 27 14:34:47 crc kubenswrapper[4833]: I0127 14:34:47.105288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"34ce5a19-5697-4b62-8ec7-220c35fb2123","Type":"ContainerStarted","Data":"940ddd7b3119eb2589d0cc7ad8e0793bbac4263f824267d5886926026775ec28"} Jan 27 14:34:47 crc kubenswrapper[4833]: I0127 14:34:47.105942 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:47 crc kubenswrapper[4833]: I0127 14:34:47.136783 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.136764738 podStartE2EDuration="2.136764738s" podCreationTimestamp="2026-01-27 14:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:47.127360193 +0000 UTC m=+1388.778684635" watchObservedRunningTime="2026-01-27 14:34:47.136764738 +0000 UTC m=+1388.788089150" Jan 27 14:34:47 crc kubenswrapper[4833]: I0127 14:34:47.464393 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:34:47 crc 
kubenswrapper[4833]: I0127 14:34:47.465869 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:34:47 crc kubenswrapper[4833]: I0127 14:34:47.498297 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:34:51 crc kubenswrapper[4833]: I0127 14:34:51.475193 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:34:51 crc kubenswrapper[4833]: I0127 14:34:51.475508 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.464682 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.465064 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.498359 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.539584 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.556691 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:52 crc kubenswrapper[4833]: I0127 14:34:52.556705 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.214:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:53 crc kubenswrapper[4833]: I0127 14:34:53.185814 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:34:53 crc kubenswrapper[4833]: I0127 14:34:53.480697 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:53 crc kubenswrapper[4833]: I0127 14:34:53.481563 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:55 crc kubenswrapper[4833]: I0127 14:34:55.511513 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.865533 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"] Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.868067 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.882890 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"] Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.983418 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.983590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:56 crc kubenswrapper[4833]: I0127 14:34:56.983659 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkxwr\" (UniqueName: \"kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.085862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkxwr\" (UniqueName: \"kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.085929 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.086051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.086545 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.087069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.113658 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkxwr\" (UniqueName: \"kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr\") pod \"redhat-operators-sz2g7\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") " pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.194797 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:34:57 crc kubenswrapper[4833]: I0127 14:34:57.747302 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"] Jan 27 14:34:57 crc kubenswrapper[4833]: W0127 14:34:57.755384 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f712ce_8328_4046_b783_9cdab46c4483.slice/crio-0d71bceae2cc59031e5f90831dd7184aec67671665294f8dd1a57c4e1d012036 WatchSource:0}: Error finding container 0d71bceae2cc59031e5f90831dd7184aec67671665294f8dd1a57c4e1d012036: Status 404 returned error can't find the container with id 0d71bceae2cc59031e5f90831dd7184aec67671665294f8dd1a57c4e1d012036 Jan 27 14:34:58 crc kubenswrapper[4833]: I0127 14:34:58.207733 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1f712ce-8328-4046-b783-9cdab46c4483" containerID="4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405" exitCode=0 Jan 27 14:34:58 crc kubenswrapper[4833]: I0127 14:34:58.207834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerDied","Data":"4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405"} Jan 27 14:34:58 crc kubenswrapper[4833]: I0127 14:34:58.208065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerStarted","Data":"0d71bceae2cc59031e5f90831dd7184aec67671665294f8dd1a57c4e1d012036"} Jan 27 14:35:00 crc kubenswrapper[4833]: I0127 14:35:00.232145 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1f712ce-8328-4046-b783-9cdab46c4483" containerID="c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707" exitCode=0 Jan 27 14:35:00 crc kubenswrapper[4833]: I0127 14:35:00.232217 
4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerDied","Data":"c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707"} Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.479697 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.480014 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.480888 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.481105 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.483559 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.487009 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.687728 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.691796 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.741595 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792717 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gjf\" (UniqueName: \"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792774 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792912 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.792962 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.894875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895391 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895677 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.895952 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gjf\" (UniqueName: \"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.896209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.896477 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.896921 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.897184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:01 crc kubenswrapper[4833]: I0127 14:35:01.914075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gjf\" (UniqueName: \"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf\") pod \"dnsmasq-dns-5c7b6c5df9-2cdld\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.020185 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.251558 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerStarted","Data":"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c"} Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.261208 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.261271 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.261319 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.262093 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.262148 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10" gracePeriod=600 Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.325364 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sz2g7" podStartSLOduration=2.923510256 podStartE2EDuration="6.325345018s" podCreationTimestamp="2026-01-27 14:34:56 +0000 UTC" firstStartedPulling="2026-01-27 14:34:58.20964023 +0000 UTC m=+1399.860964622" lastFinishedPulling="2026-01-27 14:35:01.611474982 +0000 UTC m=+1403.262799384" observedRunningTime="2026-01-27 14:35:02.305213486 +0000 UTC m=+1403.956537898" watchObservedRunningTime="2026-01-27 14:35:02.325345018 +0000 UTC m=+1403.976669420" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.478504 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.482284 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.542153 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:35:02 crc kubenswrapper[4833]: I0127 14:35:02.800498 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.263257 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10" exitCode=0 Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.263662 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10"} Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.263690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce"} Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.263708 4833 scope.go:117] "RemoveContainer" containerID="1805c559dece1ffe1bcc960333ee27cf010cdc7a9c45dfb4f0b8b1c23725f37b" Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.265713 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerStarted","Data":"e436b5aca26a298a60b98ec925602b6de9e8849e5901a250aabdef355b64e238"} Jan 27 14:35:03 crc kubenswrapper[4833]: I0127 14:35:03.273560 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:35:04 crc kubenswrapper[4833]: I0127 14:35:04.280119 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerStarted","Data":"cd9944f75ddb37baf47cf4bc0e105e06506e50c13e35be010f15df06fcc6fabf"} Jan 27 14:35:04 crc kubenswrapper[4833]: I0127 14:35:04.332912 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:04 crc kubenswrapper[4833]: I0127 14:35:04.333121 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-log" containerID="cri-o://6ff89710e397ecfd9f0deb11fc10bd5a3e1da45337fa106e52be4875ac86db3c" gracePeriod=30 Jan 27 14:35:04 crc kubenswrapper[4833]: I0127 
14:35:04.333570 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-api" containerID="cri-o://d47f1e663ef10649c575615729acfcd0fd466fdc25beebe3cb9f54b9d9d8c600" gracePeriod=30 Jan 27 14:35:05 crc kubenswrapper[4833]: I0127 14:35:05.299341 4833 generic.go:334] "Generic (PLEG): container finished" podID="b040472d-4a33-4442-bacd-23c8a68983c9" containerID="cd9944f75ddb37baf47cf4bc0e105e06506e50c13e35be010f15df06fcc6fabf" exitCode=0 Jan 27 14:35:05 crc kubenswrapper[4833]: I0127 14:35:05.299475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerDied","Data":"cd9944f75ddb37baf47cf4bc0e105e06506e50c13e35be010f15df06fcc6fabf"} Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.317748 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerStarted","Data":"d069df7882429b82ff956a7fe3e3fc9d6730c736d8d69f57c3a40b3c584b6323"} Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.319695 4833 generic.go:334] "Generic (PLEG): container finished" podID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" containerID="48a656f233b313983d5c9da49c3e668d9a2a075f62eef4e001722264d6055704" exitCode=137 Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.319801 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4","Type":"ContainerDied","Data":"48a656f233b313983d5c9da49c3e668d9a2a075f62eef4e001722264d6055704"} Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.326271 4833 generic.go:334] "Generic (PLEG): container finished" podID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerID="6ff89710e397ecfd9f0deb11fc10bd5a3e1da45337fa106e52be4875ac86db3c" 
exitCode=143 Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.326316 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerDied","Data":"6ff89710e397ecfd9f0deb11fc10bd5a3e1da45337fa106e52be4875ac86db3c"} Jan 27 14:35:06 crc kubenswrapper[4833]: I0127 14:35:06.951838 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.195253 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.195572 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.334902 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.389383 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" podStartSLOduration=6.389359803 podStartE2EDuration="6.389359803s" podCreationTimestamp="2026-01-27 14:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:07.378955543 +0000 UTC m=+1409.030279945" watchObservedRunningTime="2026-01-27 14:35:07.389359803 +0000 UTC m=+1409.040684205" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.445503 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.533474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle\") pod \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.533565 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data\") pod \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.533603 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqp2m\" (UniqueName: \"kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m\") pod \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\" (UID: \"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4\") " Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.542365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m" (OuterVolumeSpecName: "kube-api-access-jqp2m") pod "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" (UID: "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4"). InnerVolumeSpecName "kube-api-access-jqp2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.592944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" (UID: "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.601758 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data" (OuterVolumeSpecName: "config-data") pod "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" (UID: "494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.640041 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.640076 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:07 crc kubenswrapper[4833]: I0127 14:35:07.640088 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqp2m\" (UniqueName: \"kubernetes.io/projected/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4-kube-api-access-jqp2m\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.291154 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sz2g7" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="registry-server" probeResult="failure" output=< Jan 27 14:35:08 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:35:08 crc kubenswrapper[4833]: > Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.355063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4","Type":"ContainerDied","Data":"111e12feeadbee3934fa23d28b0655277fcbebe9957d2ca0cdbfe206802f0565"} Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.355109 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.355123 4833 scope.go:117] "RemoveContainer" containerID="48a656f233b313983d5c9da49c3e668d9a2a075f62eef4e001722264d6055704" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.365956 4833 generic.go:334] "Generic (PLEG): container finished" podID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerID="d47f1e663ef10649c575615729acfcd0fd466fdc25beebe3cb9f54b9d9d8c600" exitCode=0 Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.366030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerDied","Data":"d47f1e663ef10649c575615729acfcd0fd466fdc25beebe3cb9f54b9d9d8c600"} Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.426764 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.448164 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.474583 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:35:08 crc kubenswrapper[4833]: E0127 14:35:08.475077 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.475093 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:35:08 crc 
kubenswrapper[4833]: I0127 14:35:08.475290 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.476224 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.478223 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.480255 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.480352 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.489790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.574707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.574803 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.574828 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.574949 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.574979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhx47\" (UniqueName: \"kubernetes.io/projected/81be4ddd-ca35-4605-a032-96d22c32ffca-kube-api-access-vhx47\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.676549 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.676615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhx47\" (UniqueName: \"kubernetes.io/projected/81be4ddd-ca35-4605-a032-96d22c32ffca-kube-api-access-vhx47\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.676648 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.677377 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.677431 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.682682 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.682828 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.687022 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.687853 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81be4ddd-ca35-4605-a032-96d22c32ffca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.694282 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhx47\" (UniqueName: \"kubernetes.io/projected/81be4ddd-ca35-4605-a032-96d22c32ffca-kube-api-access-vhx47\") pod \"nova-cell1-novncproxy-0\" (UID: \"81be4ddd-ca35-4605-a032-96d22c32ffca\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.793434 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.796392 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.882013 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data\") pod \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.882533 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle\") pod \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.882579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs\") pod \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.882610 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfplv\" (UniqueName: \"kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv\") pod \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\" (UID: \"bdb7d742-5021-4a0e-8a0a-dc2e46906c91\") " Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.883420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs" (OuterVolumeSpecName: "logs") pod "bdb7d742-5021-4a0e-8a0a-dc2e46906c91" (UID: "bdb7d742-5021-4a0e-8a0a-dc2e46906c91"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.896367 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv" (OuterVolumeSpecName: "kube-api-access-kfplv") pod "bdb7d742-5021-4a0e-8a0a-dc2e46906c91" (UID: "bdb7d742-5021-4a0e-8a0a-dc2e46906c91"). InnerVolumeSpecName "kube-api-access-kfplv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.930873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdb7d742-5021-4a0e-8a0a-dc2e46906c91" (UID: "bdb7d742-5021-4a0e-8a0a-dc2e46906c91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.950661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data" (OuterVolumeSpecName: "config-data") pod "bdb7d742-5021-4a0e-8a0a-dc2e46906c91" (UID: "bdb7d742-5021-4a0e-8a0a-dc2e46906c91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.995874 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.995901 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.995912 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfplv\" (UniqueName: \"kubernetes.io/projected/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-kube-api-access-kfplv\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:08 crc kubenswrapper[4833]: I0127 14:35:08.995923 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb7d742-5021-4a0e-8a0a-dc2e46906c91-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.156478 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.156760 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-central-agent" containerID="cri-o://3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb" gracePeriod=30 Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.156880 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="proxy-httpd" containerID="cri-o://f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b" gracePeriod=30 Jan 27 14:35:09 crc 
kubenswrapper[4833]: I0127 14:35:09.156924 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="sg-core" containerID="cri-o://46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613" gracePeriod=30 Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.156970 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-notification-agent" containerID="cri-o://c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a" gracePeriod=30 Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.235902 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4" path="/var/lib/kubelet/pods/494cb5a3-c6a9-47d2-97d2-76f2f20cc0a4/volumes" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.387203 4833 generic.go:334] "Generic (PLEG): container finished" podID="b2a045db-b852-47be-bc8d-c59a882a791d" containerID="f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b" exitCode=0 Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.387243 4833 generic.go:334] "Generic (PLEG): container finished" podID="b2a045db-b852-47be-bc8d-c59a882a791d" containerID="46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613" exitCode=2 Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.387288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerDied","Data":"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"} Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.387365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerDied","Data":"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"} Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.391720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bdb7d742-5021-4a0e-8a0a-dc2e46906c91","Type":"ContainerDied","Data":"1eeab57732760052e6278340bf93ea4d2218c3e0b9d95fc002af22e339cda776"} Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.391786 4833 scope.go:117] "RemoveContainer" containerID="d47f1e663ef10649c575615729acfcd0fd466fdc25beebe3cb9f54b9d9d8c600" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.391801 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.418770 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.450095 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.462908 4833 scope.go:117] "RemoveContainer" containerID="6ff89710e397ecfd9f0deb11fc10bd5a3e1da45337fa106e52be4875ac86db3c" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.473147 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: W0127 14:35:09.476337 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81be4ddd_ca35_4605_a032_96d22c32ffca.slice/crio-f45ba5dcd1cfc50bafd0d018205a5a8a337d399dd99f9ba84e949a3ede10f643 WatchSource:0}: Error finding container f45ba5dcd1cfc50bafd0d018205a5a8a337d399dd99f9ba84e949a3ede10f643: Status 404 returned error can't find the container with id f45ba5dcd1cfc50bafd0d018205a5a8a337d399dd99f9ba84e949a3ede10f643 Jan 27 14:35:09 crc 
kubenswrapper[4833]: I0127 14:35:09.491506 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: E0127 14:35:09.492238 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-log" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.492259 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-log" Jan 27 14:35:09 crc kubenswrapper[4833]: E0127 14:35:09.492292 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-api" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.492298 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-api" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.492492 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-api" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.492518 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" containerName="nova-api-log" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.493674 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.498537 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.500861 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.500863 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.520690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.611710 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.611761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.611807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.611916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.612027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jqp\" (UniqueName: \"kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.612303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.714630 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2jqp\" (UniqueName: \"kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.714725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.714752 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" 
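The entries above trace the kubelet volume lifecycle for a pod: `VerifyControllerAttachedVolume started`, then `MountVolume started`, then `MountVolume.SetUp succeeded` (and, on teardown, `UnmountVolume started` / `UnmountVolume.TearDown succeeded` / `Volume detached`). A minimal sketch for grouping such journal lines into `(pod, volume, phase)` events — the regexes and phase labels here are illustrative assumptions about the klog text format shown above, not a kubelet API:

```python
import re

# Matches volume \"name\" (klog-escaped) or volume "name" (TearDown lines).
VOLUME_RE = re.compile(r'volume \\?"([^"\\]+)\\?"')
POD_RE = re.compile(r'pod="([^"]+)"')

# Illustrative phase labels keyed on message substrings seen in the log.
PHASES = [
    ("VerifyControllerAttachedVolume started", "attach-verified"),
    ("MountVolume.SetUp succeeded", "mounted"),
    ("MountVolume started", "mount-started"),
    ("UnmountVolume.TearDown succeeded", "unmounted"),
    ("UnmountVolume started", "unmount-started"),
    ("Volume detached", "detached"),
]

def volume_events(lines):
    """Yield (pod, volume, phase) for each matching kubelet journal line."""
    for line in lines:
        for marker, phase in PHASES:
            if marker in line:
                vol = VOLUME_RE.search(line)
                pod = POD_RE.search(line)
                yield (pod.group(1) if pod else "?",
                       vol.group(1) if vol else "?",
                       phase)
                break
```

Feeding the journal text through this yields one event per reconciler/operation-generator line, which makes it easy to confirm that every `mount-started` for a pod is paired with a `mounted` before the sandbox is created.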
Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.714783 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.714811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.715812 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.715895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.720335 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.721389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.723756 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.723949 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.734133 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2jqp\" (UniqueName: \"kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp\") pod \"nova-api-0\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " pod="openstack/nova-api-0" Jan 27 14:35:09 crc kubenswrapper[4833]: I0127 14:35:09.823529 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.154267 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.227185 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.227510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92n6s\" (UniqueName: \"kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.227664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.227805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.228062 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.228151 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.228266 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.228383 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle\") pod \"b2a045db-b852-47be-bc8d-c59a882a791d\" (UID: \"b2a045db-b852-47be-bc8d-c59a882a791d\") " Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.230726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.234164 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts" (OuterVolumeSpecName: "scripts") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.234925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s" (OuterVolumeSpecName: "kube-api-access-92n6s") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "kube-api-access-92n6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.235115 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.274715 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.330351 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.330376 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.330385 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.330398 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92n6s\" (UniqueName: \"kubernetes.io/projected/b2a045db-b852-47be-bc8d-c59a882a791d-kube-api-access-92n6s\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.330408 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2a045db-b852-47be-bc8d-c59a882a791d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.342938 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.347525 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.365894 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data" (OuterVolumeSpecName: "config-data") pod "b2a045db-b852-47be-bc8d-c59a882a791d" (UID: "b2a045db-b852-47be-bc8d-c59a882a791d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.369930 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.402150 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81be4ddd-ca35-4605-a032-96d22c32ffca","Type":"ContainerStarted","Data":"572160e41daacc9db64d3505fe09b21a80d87e21dab1361a37d2963409db3d0d"} Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.402197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81be4ddd-ca35-4605-a032-96d22c32ffca","Type":"ContainerStarted","Data":"f45ba5dcd1cfc50bafd0d018205a5a8a337d399dd99f9ba84e949a3ede10f643"} Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407526 4833 generic.go:334] "Generic (PLEG): container finished" podID="b2a045db-b852-47be-bc8d-c59a882a791d" containerID="c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a" exitCode=0 Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 
14:35:10.407556 4833 generic.go:334] "Generic (PLEG): container finished" podID="b2a045db-b852-47be-bc8d-c59a882a791d" containerID="3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb" exitCode=0
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407577 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerDied","Data":"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"}
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407621 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerDied","Data":"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"}
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407631 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2a045db-b852-47be-bc8d-c59a882a791d","Type":"ContainerDied","Data":"6745810b406f7d1c531e113a8a2a55823cfaafb390da11ac7191c12112b3ff63"}
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.407645 4833 scope.go:117] "RemoveContainer" containerID="f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.409182 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerStarted","Data":"1f306902e61653c406153952af2aeeefd1e08924f15dee963cb61c94289e61ad"}
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.424483 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.424462685 podStartE2EDuration="2.424462685s" podCreationTimestamp="2026-01-27 14:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:10.421065444 +0000 UTC m=+1412.072389846" watchObservedRunningTime="2026-01-27 14:35:10.424462685 +0000 UTC m=+1412.075787087"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.439785 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.439817 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.439828 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a045db-b852-47be-bc8d-c59a882a791d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.458076 4833 scope.go:117] "RemoveContainer" containerID="46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.462035 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.487101 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.496304 4833 scope.go:117] "RemoveContainer" containerID="c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.499533 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.500087 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-central-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500111 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-central-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.500143 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="proxy-httpd"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500152 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="proxy-httpd"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.500175 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="sg-core"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500183 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="sg-core"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.500195 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-notification-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500203 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-notification-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500474 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-central-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500512 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="proxy-httpd"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500534 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="ceilometer-notification-agent"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.500554 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" containerName="sg-core"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.502720 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.505061 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.506299 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.506413 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.512801 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.545786 4833 scope.go:117] "RemoveContainer" containerID="3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.583608 4833 scope.go:117] "RemoveContainer" containerID="f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.594946 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b\": container with ID starting with f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b not found: ID does not exist" containerID="f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.594999 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"} err="failed to get container status \"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b\": rpc error: code = NotFound desc = could not find container \"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b\": container with ID starting with f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.595038 4833 scope.go:117] "RemoveContainer" containerID="46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.596066 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613\": container with ID starting with 46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613 not found: ID does not exist" containerID="46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.596133 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"} err="failed to get container status \"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613\": rpc error: code = NotFound desc = could not find container \"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613\": container with ID starting with 46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613 not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.596154 4833 scope.go:117] "RemoveContainer" containerID="c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.601626 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a\": container with ID starting with c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a not found: ID does not exist" containerID="c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.601671 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"} err="failed to get container status \"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a\": rpc error: code = NotFound desc = could not find container \"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a\": container with ID starting with c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.601699 4833 scope.go:117] "RemoveContainer" containerID="3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"
Jan 27 14:35:10 crc kubenswrapper[4833]: E0127 14:35:10.602385 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb\": container with ID starting with 3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb not found: ID does not exist" containerID="3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.602441 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"} err="failed to get container status \"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb\": rpc error: code = NotFound desc = could not find container \"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb\": container with ID starting with 3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.602494 4833 scope.go:117] "RemoveContainer" containerID="f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.602746 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b"} err="failed to get container status \"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b\": rpc error: code = NotFound desc = could not find container \"f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b\": container with ID starting with f775dfb915d6d6aac07799c7f9ac5665ebad4f9f6bc718332a4de7edbe812e2b not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.602771 4833 scope.go:117] "RemoveContainer" containerID="46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.603211 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613"} err="failed to get container status \"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613\": rpc error: code = NotFound desc = could not find container \"46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613\": container with ID starting with 46b9b1751270f202ab3735ce987fe483786fdfbdad32ad3249c31c693fdb9613 not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.603232 4833 scope.go:117] "RemoveContainer" containerID="c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.603411 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a"} err="failed to get container status \"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a\": rpc error: code = NotFound desc = could not find container \"c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a\": container with ID starting with c0bdaba6f05c0d7a59a1281f7bac24d657f5131cf6a83b6998c85662587e755a not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.603430 4833 scope.go:117] "RemoveContainer" containerID="3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.603736 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb"} err="failed to get container status \"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb\": rpc error: code = NotFound desc = could not find container \"3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb\": container with ID starting with 3789c86d7e10b77380ee40461522c2cc5aa18ee92148b21b4413064d241fd3bb not found: ID does not exist"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644544 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644613 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-log-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644640 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-scripts\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtjr\" (UniqueName: \"kubernetes.io/projected/fb7f2233-9721-49dd-b25b-eb5dcaa69303-kube-api-access-jwtjr\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-config-data\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644806 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644826 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-run-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.644855 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwtjr\" (UniqueName: \"kubernetes.io/projected/fb7f2233-9721-49dd-b25b-eb5dcaa69303-kube-api-access-jwtjr\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-config-data\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747355 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-run-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-log-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.747498 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-scripts\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.748179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-run-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.748316 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb7f2233-9721-49dd-b25b-eb5dcaa69303-log-httpd\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.751247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-scripts\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.756925 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.757113 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.758078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.759643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7f2233-9721-49dd-b25b-eb5dcaa69303-config-data\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.769271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwtjr\" (UniqueName: \"kubernetes.io/projected/fb7f2233-9721-49dd-b25b-eb5dcaa69303-kube-api-access-jwtjr\") pod \"ceilometer-0\" (UID: \"fb7f2233-9721-49dd-b25b-eb5dcaa69303\") " pod="openstack/ceilometer-0"
Jan 27 14:35:10 crc kubenswrapper[4833]: I0127 14:35:10.846376 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.222891 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2a045db-b852-47be-bc8d-c59a882a791d" path="/var/lib/kubelet/pods/b2a045db-b852-47be-bc8d-c59a882a791d/volumes"
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.224400 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdb7d742-5021-4a0e-8a0a-dc2e46906c91" path="/var/lib/kubelet/pods/bdb7d742-5021-4a0e-8a0a-dc2e46906c91/volumes"
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.358209 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.420902 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb7f2233-9721-49dd-b25b-eb5dcaa69303","Type":"ContainerStarted","Data":"2c5aa0638890375c49eecff877b01ee8278eae3a5244c5ee3f5203fe4cbead8b"}
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.424198 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerStarted","Data":"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d"}
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.424240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerStarted","Data":"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4"}
Jan 27 14:35:11 crc kubenswrapper[4833]: I0127 14:35:11.465225 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.465206356 podStartE2EDuration="2.465206356s" podCreationTimestamp="2026-01-27 14:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:11.449496 +0000 UTC m=+1413.100820402" watchObservedRunningTime="2026-01-27 14:35:11.465206356 +0000 UTC m=+1413.116530758"
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.022614 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld"
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.087036 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"]
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.087523 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="dnsmasq-dns" containerID="cri-o://8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11" gracePeriod=10
Jan 27 14:35:12 crc kubenswrapper[4833]: E0127 14:35:12.176775 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13ed824c_8836_4646_9a33_8f65299b3201.slice/crio-8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.451582 4833 generic.go:334] "Generic (PLEG): container finished" podID="13ed824c-8836-4646-9a33-8f65299b3201" containerID="8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11" exitCode=0
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.452596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" event={"ID":"13ed824c-8836-4646-9a33-8f65299b3201","Type":"ContainerDied","Data":"8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11"}
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.459130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb7f2233-9721-49dd-b25b-eb5dcaa69303","Type":"ContainerStarted","Data":"96317caacdfa14481e76a43aaf2abf6f651c523cd9cae585aee6a6e709c559b0"}
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.627578 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw"
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.801579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.801996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.802027 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmc59\" (UniqueName: \"kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.802093 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.802202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.802227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb\") pod \"13ed824c-8836-4646-9a33-8f65299b3201\" (UID: \"13ed824c-8836-4646-9a33-8f65299b3201\") "
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.806425 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59" (OuterVolumeSpecName: "kube-api-access-lmc59") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "kube-api-access-lmc59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.852035 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.862925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.863155 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config" (OuterVolumeSpecName: "config") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.863733 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.903775 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13ed824c-8836-4646-9a33-8f65299b3201" (UID: "13ed824c-8836-4646-9a33-8f65299b3201"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904283 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904310 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmc59\" (UniqueName: \"kubernetes.io/projected/13ed824c-8836-4646-9a33-8f65299b3201-kube-api-access-lmc59\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904326 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904340 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904351 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:12 crc kubenswrapper[4833]: I0127 14:35:12.904366 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13ed824c-8836-4646-9a33-8f65299b3201-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.619095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb7f2233-9721-49dd-b25b-eb5dcaa69303","Type":"ContainerStarted","Data":"a0e37bb4480878d733bf7f213b9d8943a6119a9663a22a95005ff8c77d1b466d"}
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.627466 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw" event={"ID":"13ed824c-8836-4646-9a33-8f65299b3201","Type":"ContainerDied","Data":"71409c93ef06e964402f2955c2a92197eb56aa22acb106425d54b215889bbb35"}
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.627533 4833 scope.go:117] "RemoveContainer" containerID="8052d4bee7bedc4be252ed2b5835fb964eb506bcbb96ecf0ea86b480c41c0c11"
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.627697 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-rn5kw"
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.658081 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"]
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.667050 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-rn5kw"]
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.806099 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:35:13 crc kubenswrapper[4833]: I0127 14:35:13.815288 4833 scope.go:117] "RemoveContainer" containerID="fc01118492213c26731532d82e5cbedc26e286f21a0f2f08c8aa961011c92e3a"
Jan 27 14:35:14 crc kubenswrapper[4833]: I0127 14:35:14.698260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb7f2233-9721-49dd-b25b-eb5dcaa69303","Type":"ContainerStarted","Data":"8cd19aa3014d09bf58f02db1ed22d385b5dcc0d159cf0e80948ceee6c8a91e94"}
Jan 27 14:35:15 crc kubenswrapper[4833]: I0127 14:35:15.220643 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ed824c-8836-4646-9a33-8f65299b3201" path="/var/lib/kubelet/pods/13ed824c-8836-4646-9a33-8f65299b3201/volumes"
Jan 27 14:35:15 crc kubenswrapper[4833]: I0127 14:35:15.712523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb7f2233-9721-49dd-b25b-eb5dcaa69303","Type":"ContainerStarted","Data":"5d047aec2b7822111a505f001587ec22028abd199f11e83d50168082cb5de639"}
Jan 27 14:35:15 crc kubenswrapper[4833]: I0127 14:35:15.714520 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 14:35:17 crc kubenswrapper[4833]: I0127 14:35:17.247959 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sz2g7"
Jan 27 14:35:17 crc kubenswrapper[4833]: I0127 14:35:17.269953 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.210080699 podStartE2EDuration="7.269934539s" podCreationTimestamp="2026-01-27 14:35:10 +0000 UTC" firstStartedPulling="2026-01-27 14:35:11.354800003 +0000 UTC m=+1413.006124405" lastFinishedPulling="2026-01-27 14:35:15.414653823 +0000 UTC m=+1417.065978245" observedRunningTime="2026-01-27 14:35:15.736353503 +0000 UTC m=+1417.387677905" watchObservedRunningTime="2026-01-27 14:35:17.269934539 +0000 UTC m=+1418.921258941"
Jan 27 14:35:17 crc kubenswrapper[4833]: I0127 14:35:17.296133 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sz2g7"
Jan 27 14:35:17 crc kubenswrapper[4833]: I0127 14:35:17.483927 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"]
Jan 27 14:35:18 crc kubenswrapper[4833]: I0127 14:35:18.739014 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sz2g7" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="registry-server" containerID="cri-o://67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c" gracePeriod=2
Jan 27 14:35:18 crc kubenswrapper[4833]: I0127 14:35:18.797493 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:35:18 crc kubenswrapper[4833]: I0127 14:35:18.815082 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.236570 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sz2g7"
Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.283731 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities\") pod \"f1f712ce-8328-4046-b783-9cdab46c4483\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") "
Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.283791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkxwr\" (UniqueName: \"kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr\") pod \"f1f712ce-8328-4046-b783-9cdab46c4483\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") "
Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.283852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content\") pod \"f1f712ce-8328-4046-b783-9cdab46c4483\" (UID: \"f1f712ce-8328-4046-b783-9cdab46c4483\") "
Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.287168 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities" (OuterVolumeSpecName: "utilities") pod "f1f712ce-8328-4046-b783-9cdab46c4483" (UID: "f1f712ce-8328-4046-b783-9cdab46c4483"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.296117 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr" (OuterVolumeSpecName: "kube-api-access-rkxwr") pod "f1f712ce-8328-4046-b783-9cdab46c4483" (UID: "f1f712ce-8328-4046-b783-9cdab46c4483"). InnerVolumeSpecName "kube-api-access-rkxwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.387487 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.387532 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkxwr\" (UniqueName: \"kubernetes.io/projected/f1f712ce-8328-4046-b783-9cdab46c4483-kube-api-access-rkxwr\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.412357 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1f712ce-8328-4046-b783-9cdab46c4483" (UID: "f1f712ce-8328-4046-b783-9cdab46c4483"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.488340 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f712ce-8328-4046-b783-9cdab46c4483-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.750419 4833 generic.go:334] "Generic (PLEG): container finished" podID="f1f712ce-8328-4046-b783-9cdab46c4483" containerID="67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c" exitCode=0 Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.751697 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sz2g7" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.753524 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerDied","Data":"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c"} Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.753580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sz2g7" event={"ID":"f1f712ce-8328-4046-b783-9cdab46c4483","Type":"ContainerDied","Data":"0d71bceae2cc59031e5f90831dd7184aec67671665294f8dd1a57c4e1d012036"} Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.753603 4833 scope.go:117] "RemoveContainer" containerID="67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.777419 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.781791 4833 scope.go:117] "RemoveContainer" containerID="c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 
14:35:19.800232 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"] Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.815059 4833 scope.go:117] "RemoveContainer" containerID="4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.824924 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.824959 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.830185 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sz2g7"] Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.927837 4833 scope.go:117] "RemoveContainer" containerID="67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.928565 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c\": container with ID starting with 67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c not found: ID does not exist" containerID="67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.928596 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c"} err="failed to get container status \"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c\": rpc error: code = NotFound desc = could not find container \"67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c\": container with ID starting with 
67ad255a832bfb3ea0bd282b667455c7109a056b93fa8948e11a15f82438022c not found: ID does not exist" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.928616 4833 scope.go:117] "RemoveContainer" containerID="c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.929020 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707\": container with ID starting with c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707 not found: ID does not exist" containerID="c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.929071 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707"} err="failed to get container status \"c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707\": rpc error: code = NotFound desc = could not find container \"c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707\": container with ID starting with c662a8562812f16729e3fcf01ac8597dcaf16ad4b1957d9b7edf6a8b36f5f707 not found: ID does not exist" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.929084 4833 scope.go:117] "RemoveContainer" containerID="4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.929292 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405\": container with ID starting with 4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405 not found: ID does not exist" containerID="4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405" Jan 27 14:35:19 crc 
kubenswrapper[4833]: I0127 14:35:19.929311 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405"} err="failed to get container status \"4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405\": rpc error: code = NotFound desc = could not find container \"4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405\": container with ID starting with 4b6ef57d48fe68c6b0a6409209828b0cdb712ae8bf74d9a36528cdc1d040f405 not found: ID does not exist" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.995481 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jpzsm"] Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.995896 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="extract-utilities" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.995913 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="extract-utilities" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.995934 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="registry-server" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.995942 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="registry-server" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.995966 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="dnsmasq-dns" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.995973 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="dnsmasq-dns" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.995985 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="init" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.995990 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="init" Jan 27 14:35:19 crc kubenswrapper[4833]: E0127 14:35:19.996002 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="extract-content" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.996009 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="extract-content" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.996171 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ed824c-8836-4646-9a33-8f65299b3201" containerName="dnsmasq-dns" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.996182 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" containerName="registry-server" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.996891 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.997595 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.997636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95b7\" (UniqueName: \"kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.997690 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:19 crc kubenswrapper[4833]: I0127 14:35:19.997761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.001392 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.001435 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell1-manage-config-data" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.019558 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jpzsm"] Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.099285 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.099376 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.099593 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.099736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l95b7\" (UniqueName: \"kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.107197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.108905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.111015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.119430 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l95b7\" (UniqueName: \"kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7\") pod \"nova-cell1-cell-mapping-jpzsm\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.330846 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.838630 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.838655 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:35:20 crc kubenswrapper[4833]: I0127 14:35:20.844106 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jpzsm"] Jan 27 14:35:20 crc kubenswrapper[4833]: W0127 14:35:20.845979 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod211af2ba_ae31_4ecd_9063_b277bffb42b7.slice/crio-ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a WatchSource:0}: Error finding container ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a: Status 404 returned error can't find the container with id ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a Jan 27 14:35:21 crc kubenswrapper[4833]: I0127 14:35:21.221630 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f712ce-8328-4046-b783-9cdab46c4483" path="/var/lib/kubelet/pods/f1f712ce-8328-4046-b783-9cdab46c4483/volumes" Jan 27 14:35:21 crc kubenswrapper[4833]: I0127 14:35:21.780653 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jpzsm" 
event={"ID":"211af2ba-ae31-4ecd-9063-b277bffb42b7","Type":"ContainerStarted","Data":"3c25a67d1993a9ec71f11e0b2f16b1ae7b21b3fd3a86528a3ae7709965ecb36c"} Jan 27 14:35:21 crc kubenswrapper[4833]: I0127 14:35:21.780700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jpzsm" event={"ID":"211af2ba-ae31-4ecd-9063-b277bffb42b7","Type":"ContainerStarted","Data":"ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a"} Jan 27 14:35:21 crc kubenswrapper[4833]: I0127 14:35:21.808901 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jpzsm" podStartSLOduration=2.808873555 podStartE2EDuration="2.808873555s" podCreationTimestamp="2026-01-27 14:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:21.797426421 +0000 UTC m=+1423.448750843" watchObservedRunningTime="2026-01-27 14:35:21.808873555 +0000 UTC m=+1423.460197977" Jan 27 14:35:26 crc kubenswrapper[4833]: I0127 14:35:26.837123 4833 generic.go:334] "Generic (PLEG): container finished" podID="211af2ba-ae31-4ecd-9063-b277bffb42b7" containerID="3c25a67d1993a9ec71f11e0b2f16b1ae7b21b3fd3a86528a3ae7709965ecb36c" exitCode=0 Jan 27 14:35:26 crc kubenswrapper[4833]: I0127 14:35:26.837361 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jpzsm" event={"ID":"211af2ba-ae31-4ecd-9063-b277bffb42b7","Type":"ContainerDied","Data":"3c25a67d1993a9ec71f11e0b2f16b1ae7b21b3fd3a86528a3ae7709965ecb36c"} Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.320849 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.464935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts\") pod \"211af2ba-ae31-4ecd-9063-b277bffb42b7\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.464980 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data\") pod \"211af2ba-ae31-4ecd-9063-b277bffb42b7\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.465095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle\") pod \"211af2ba-ae31-4ecd-9063-b277bffb42b7\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.465143 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l95b7\" (UniqueName: \"kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7\") pod \"211af2ba-ae31-4ecd-9063-b277bffb42b7\" (UID: \"211af2ba-ae31-4ecd-9063-b277bffb42b7\") " Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.470370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7" (OuterVolumeSpecName: "kube-api-access-l95b7") pod "211af2ba-ae31-4ecd-9063-b277bffb42b7" (UID: "211af2ba-ae31-4ecd-9063-b277bffb42b7"). InnerVolumeSpecName "kube-api-access-l95b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.479754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts" (OuterVolumeSpecName: "scripts") pod "211af2ba-ae31-4ecd-9063-b277bffb42b7" (UID: "211af2ba-ae31-4ecd-9063-b277bffb42b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.495206 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data" (OuterVolumeSpecName: "config-data") pod "211af2ba-ae31-4ecd-9063-b277bffb42b7" (UID: "211af2ba-ae31-4ecd-9063-b277bffb42b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.495875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "211af2ba-ae31-4ecd-9063-b277bffb42b7" (UID: "211af2ba-ae31-4ecd-9063-b277bffb42b7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.567862 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.567899 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l95b7\" (UniqueName: \"kubernetes.io/projected/211af2ba-ae31-4ecd-9063-b277bffb42b7-kube-api-access-l95b7\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.567910 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.567919 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/211af2ba-ae31-4ecd-9063-b277bffb42b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.857906 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jpzsm" event={"ID":"211af2ba-ae31-4ecd-9063-b277bffb42b7","Type":"ContainerDied","Data":"ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a"} Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.857949 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac06eecfd66b6a2619bad98177d8e0910b25e836af8c5c23e71c49e7bc69855a" Jan 27 14:35:28 crc kubenswrapper[4833]: I0127 14:35:28.857968 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jpzsm" Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.049486 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.050082 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-log" containerID="cri-o://8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4" gracePeriod=30 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.050867 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-api" containerID="cri-o://32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d" gracePeriod=30 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.071241 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.071548 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" containerName="nova-scheduler-scheduler" containerID="cri-o://e37754861d1b42d651d3664d4ef042b095643f06c4253fae84fe9b359256044d" gracePeriod=30 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.097203 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.097427 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" containerID="cri-o://5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f" gracePeriod=30 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.097561 4833 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" containerID="cri-o://90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396" gracePeriod=30 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.873282 4833 generic.go:334] "Generic (PLEG): container finished" podID="b41bfcfc-902f-4497-911c-c266554297fe" containerID="5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f" exitCode=143 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.873371 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerDied","Data":"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f"} Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.879605 4833 generic.go:334] "Generic (PLEG): container finished" podID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerID="8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4" exitCode=143 Jan 27 14:35:29 crc kubenswrapper[4833]: I0127 14:35:29.879658 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerDied","Data":"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4"} Jan 27 14:35:31 crc kubenswrapper[4833]: I0127 14:35:31.911371 4833 generic.go:334] "Generic (PLEG): container finished" podID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" containerID="e37754861d1b42d651d3664d4ef042b095643f06c4253fae84fe9b359256044d" exitCode=0 Jan 27 14:35:31 crc kubenswrapper[4833]: I0127 14:35:31.911481 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cae6e39-d4ec-4eb5-968d-64120c5e81fb","Type":"ContainerDied","Data":"e37754861d1b42d651d3664d4ef042b095643f06c4253fae84fe9b359256044d"} Jan 27 14:35:32 crc kubenswrapper[4833]: 
I0127 14:35:32.114189 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.244322 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data\") pod \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.244540 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b96nn\" (UniqueName: \"kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn\") pod \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.244756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle\") pod \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\" (UID: \"4cae6e39-d4ec-4eb5-968d-64120c5e81fb\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.253276 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn" (OuterVolumeSpecName: "kube-api-access-b96nn") pod "4cae6e39-d4ec-4eb5-968d-64120c5e81fb" (UID: "4cae6e39-d4ec-4eb5-968d-64120c5e81fb"). InnerVolumeSpecName "kube-api-access-b96nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.280420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data" (OuterVolumeSpecName: "config-data") pod "4cae6e39-d4ec-4eb5-968d-64120c5e81fb" (UID: "4cae6e39-d4ec-4eb5-968d-64120c5e81fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.281547 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cae6e39-d4ec-4eb5-968d-64120c5e81fb" (UID: "4cae6e39-d4ec-4eb5-968d-64120c5e81fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.349125 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b96nn\" (UniqueName: \"kubernetes.io/projected/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-kube-api-access-b96nn\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.349165 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.349176 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cae6e39-d4ec-4eb5-968d-64120c5e81fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.465251 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.217.0.215:8775/\": dial tcp 10.217.0.215:8775: connect: connection refused" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.465272 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": dial tcp 10.217.0.215:8775: connect: connection refused" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.628172 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.742873 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.754524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.754708 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2jqp\" (UniqueName: \"kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.754740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.754846 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.754933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.755013 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle\") pod \"c8379b0d-848e-48f1-a61a-7cd40e578281\" (UID: \"c8379b0d-848e-48f1-a61a-7cd40e578281\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.757181 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs" (OuterVolumeSpecName: "logs") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.785491 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp" (OuterVolumeSpecName: "kube-api-access-t2jqp") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "kube-api-access-t2jqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.809529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data" (OuterVolumeSpecName: "config-data") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.813097 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.839417 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.856375 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data\") pod \"b41bfcfc-902f-4497-911c-c266554297fe\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.856501 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle\") pod \"b41bfcfc-902f-4497-911c-c266554297fe\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.856595 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7dhx\" (UniqueName: \"kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx\") pod \"b41bfcfc-902f-4497-911c-c266554297fe\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.856643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs\") pod \"b41bfcfc-902f-4497-911c-c266554297fe\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.856737 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs\") pod \"b41bfcfc-902f-4497-911c-c266554297fe\" (UID: \"b41bfcfc-902f-4497-911c-c266554297fe\") " Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.857298 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c8379b0d-848e-48f1-a61a-7cd40e578281-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.857317 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.857329 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.857342 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2jqp\" (UniqueName: \"kubernetes.io/projected/c8379b0d-848e-48f1-a61a-7cd40e578281-kube-api-access-t2jqp\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.857352 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.858623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs" (OuterVolumeSpecName: "logs") pod "b41bfcfc-902f-4497-911c-c266554297fe" (UID: "b41bfcfc-902f-4497-911c-c266554297fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.864058 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx" (OuterVolumeSpecName: "kube-api-access-f7dhx") pod "b41bfcfc-902f-4497-911c-c266554297fe" (UID: "b41bfcfc-902f-4497-911c-c266554297fe"). 
InnerVolumeSpecName "kube-api-access-f7dhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.874996 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8379b0d-848e-48f1-a61a-7cd40e578281" (UID: "c8379b0d-848e-48f1-a61a-7cd40e578281"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.884821 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b41bfcfc-902f-4497-911c-c266554297fe" (UID: "b41bfcfc-902f-4497-911c-c266554297fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.891789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data" (OuterVolumeSpecName: "config-data") pod "b41bfcfc-902f-4497-911c-c266554297fe" (UID: "b41bfcfc-902f-4497-911c-c266554297fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.923579 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b41bfcfc-902f-4497-911c-c266554297fe" (UID: "b41bfcfc-902f-4497-911c-c266554297fe"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.925951 4833 generic.go:334] "Generic (PLEG): container finished" podID="b41bfcfc-902f-4497-911c-c266554297fe" containerID="90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396" exitCode=0 Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.926000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerDied","Data":"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396"} Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.926052 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41bfcfc-902f-4497-911c-c266554297fe","Type":"ContainerDied","Data":"a9ba372ee97fcb4fb416d18e8d4b4b3265f61f85a2f6c54da4659b574c30409d"} Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.926072 4833 scope.go:117] "RemoveContainer" containerID="90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.926069 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.936652 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.936677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cae6e39-d4ec-4eb5-968d-64120c5e81fb","Type":"ContainerDied","Data":"13e512cd6c1b4c0f6214c0e5fe17aed13774ea7ce6d11bcf33eca1f2ff097346"} Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.942621 4833 generic.go:334] "Generic (PLEG): container finished" podID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerID="32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d" exitCode=0 Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.942671 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerDied","Data":"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d"} Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.942712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c8379b0d-848e-48f1-a61a-7cd40e578281","Type":"ContainerDied","Data":"1f306902e61653c406153952af2aeeefd1e08924f15dee963cb61c94289e61ad"} Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.942780 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958667 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41bfcfc-902f-4497-911c-c266554297fe-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958707 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958717 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8379b0d-848e-48f1-a61a-7cd40e578281-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958727 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958736 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41bfcfc-902f-4497-911c-c266554297fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.958744 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7dhx\" (UniqueName: \"kubernetes.io/projected/b41bfcfc-902f-4497-911c-c266554297fe-kube-api-access-f7dhx\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:32 crc kubenswrapper[4833]: I0127 14:35:32.993573 4833 scope.go:117] "RemoveContainer" containerID="5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.001306 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 
14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.021099 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.037329 4833 scope.go:117] "RemoveContainer" containerID="90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.037783 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396\": container with ID starting with 90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396 not found: ID does not exist" containerID="90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.037811 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396"} err="failed to get container status \"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396\": rpc error: code = NotFound desc = could not find container \"90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396\": container with ID starting with 90cdb5d37648b5d3e7ab7a23031dc081bc01aaf3623148c45f77ea51a7231396 not found: ID does not exist" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.037840 4833 scope.go:117] "RemoveContainer" containerID="5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.038138 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f\": container with ID starting with 5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f not found: ID does not exist" 
containerID="5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.038161 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f"} err="failed to get container status \"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f\": rpc error: code = NotFound desc = could not find container \"5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f\": container with ID starting with 5c98f1c80d0aae12d7c8c67e774b52cea2860a58688575feae072edc43cef32f not found: ID does not exist" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.038178 4833 scope.go:117] "RemoveContainer" containerID="e37754861d1b42d651d3664d4ef042b095643f06c4253fae84fe9b359256044d" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.038603 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.050951 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.063746 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064302 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064322 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064344 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" containerName="nova-scheduler-scheduler" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064352 4833 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" containerName="nova-scheduler-scheduler" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064363 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211af2ba-ae31-4ecd-9063-b277bffb42b7" containerName="nova-manage" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064371 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="211af2ba-ae31-4ecd-9063-b277bffb42b7" containerName="nova-manage" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064390 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064397 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064407 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-api" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064415 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-api" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.064431 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-log" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064439 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-log" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064664 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-log" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064677 4833 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" containerName="nova-scheduler-scheduler" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064687 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" containerName="nova-api-api" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064700 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-metadata" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064711 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41bfcfc-902f-4497-911c-c266554297fe" containerName="nova-metadata-log" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.064724 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="211af2ba-ae31-4ecd-9063-b277bffb42b7" containerName="nova-manage" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.065834 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.069892 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.070168 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.070878 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.075048 4833 scope.go:117] "RemoveContainer" containerID="32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.077978 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.086700 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.111930 4833 scope.go:117] "RemoveContainer" containerID="8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.119939 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.123393 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.129322 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.142304 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.152264 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.154338 4833 scope.go:117] "RemoveContainer" containerID="32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 14:35:33.154801 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d\": container with ID starting with 32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d not found: ID does not exist" containerID="32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.154833 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d"} err="failed to get container status \"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d\": rpc error: code = NotFound desc = could not find container \"32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d\": container with ID starting with 32104f6fd2bc42a4e846deeedc962ed4b8fe980dc8e6fc1d0fb3e67d6448e19d not found: ID does not exist" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.154854 4833 scope.go:117] "RemoveContainer" containerID="8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4" Jan 27 14:35:33 crc kubenswrapper[4833]: E0127 
14:35:33.155250 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4\": container with ID starting with 8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4 not found: ID does not exist" containerID="8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.155268 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4"} err="failed to get container status \"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4\": rpc error: code = NotFound desc = could not find container \"8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4\": container with ID starting with 8f128709396cf17eaf32fff55cbc7f021ed8e8e901881cf849857fb3f63018b4 not found: ID does not exist" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.160720 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162358 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-config-data\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162403 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162412 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162535 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50858935-78fe-4d2d-8390-20a264e996f3-logs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.162719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlz4j\" (UniqueName: \"kubernetes.io/projected/50858935-78fe-4d2d-8390-20a264e996f3-kube-api-access-rlz4j\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.163907 4833 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.164503 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.168597 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.222104 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cae6e39-d4ec-4eb5-968d-64120c5e81fb" path="/var/lib/kubelet/pods/4cae6e39-d4ec-4eb5-968d-64120c5e81fb/volumes" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.222677 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41bfcfc-902f-4497-911c-c266554297fe" path="/var/lib/kubelet/pods/b41bfcfc-902f-4497-911c-c266554297fe/volumes" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.223262 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8379b0d-848e-48f1-a61a-7cd40e578281" path="/var/lib/kubelet/pods/c8379b0d-848e-48f1-a61a-7cd40e578281/volumes" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8z8\" (UniqueName: \"kubernetes.io/projected/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-kube-api-access-wf8z8\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264239 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc 
kubenswrapper[4833]: I0127 14:35:33.264275 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkcl\" (UniqueName: \"kubernetes.io/projected/57de8349-e7e0-4ad3-ad99-8ba55b963447-kube-api-access-2bkcl\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264351 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57de8349-e7e0-4ad3-ad99-8ba55b963447-logs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50858935-78fe-4d2d-8390-20a264e996f3-logs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-config-data\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlz4j\" (UniqueName: \"kubernetes.io/projected/50858935-78fe-4d2d-8390-20a264e996f3-kube-api-access-rlz4j\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-config-data\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-config-data\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " 
pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.264773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.265129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50858935-78fe-4d2d-8390-20a264e996f3-logs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.269891 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.270355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.270722 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.270768 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/50858935-78fe-4d2d-8390-20a264e996f3-config-data\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.281593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlz4j\" (UniqueName: \"kubernetes.io/projected/50858935-78fe-4d2d-8390-20a264e996f3-kube-api-access-rlz4j\") pod \"nova-api-0\" (UID: \"50858935-78fe-4d2d-8390-20a264e996f3\") " pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.366795 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.366948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-config-data\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367047 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8z8\" (UniqueName: \"kubernetes.io/projected/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-kube-api-access-wf8z8\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " 
pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkcl\" (UniqueName: \"kubernetes.io/projected/57de8349-e7e0-4ad3-ad99-8ba55b963447-kube-api-access-2bkcl\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57de8349-e7e0-4ad3-ad99-8ba55b963447-logs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.367315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-config-data\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.368015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57de8349-e7e0-4ad3-ad99-8ba55b963447-logs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.370349 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.370787 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.373366 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de8349-e7e0-4ad3-ad99-8ba55b963447-config-data\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.373796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-config-data\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.383810 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.387067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkcl\" (UniqueName: \"kubernetes.io/projected/57de8349-e7e0-4ad3-ad99-8ba55b963447-kube-api-access-2bkcl\") pod \"nova-metadata-0\" (UID: \"57de8349-e7e0-4ad3-ad99-8ba55b963447\") " pod="openstack/nova-metadata-0" Jan 27 14:35:33 
crc kubenswrapper[4833]: I0127 14:35:33.390064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8z8\" (UniqueName: \"kubernetes.io/projected/08412a30-4bd0-44dc-a2c8-09db6cf3fc9a-kube-api-access-wf8z8\") pod \"nova-scheduler-0\" (UID: \"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a\") " pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.405714 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.443605 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.488414 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:35:33 crc kubenswrapper[4833]: W0127 14:35:33.980714 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50858935_78fe_4d2d_8390_20a264e996f3.slice/crio-95c6f66a79bf4fb66cdf663b99a980c2a1cb09102127279ba967d40ca69b6d72 WatchSource:0}: Error finding container 95c6f66a79bf4fb66cdf663b99a980c2a1cb09102127279ba967d40ca69b6d72: Status 404 returned error can't find the container with id 95c6f66a79bf4fb66cdf663b99a980c2a1cb09102127279ba967d40ca69b6d72 Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.981475 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:35:33 crc kubenswrapper[4833]: W0127 14:35:33.985100 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08412a30_4bd0_44dc_a2c8_09db6cf3fc9a.slice/crio-def26113337e32e88ece6ab2654e46756e55c959a6b7ff507d986ff695648975 WatchSource:0}: Error finding container def26113337e32e88ece6ab2654e46756e55c959a6b7ff507d986ff695648975: Status 404 returned error can't 
find the container with id def26113337e32e88ece6ab2654e46756e55c959a6b7ff507d986ff695648975 Jan 27 14:35:33 crc kubenswrapper[4833]: I0127 14:35:33.993175 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:35:34 crc kubenswrapper[4833]: W0127 14:35:34.074949 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57de8349_e7e0_4ad3_ad99_8ba55b963447.slice/crio-cded79540d69db02b2866ab5d4c0c2ba6a85fb607b52acf8c36c3cc96168887e WatchSource:0}: Error finding container cded79540d69db02b2866ab5d4c0c2ba6a85fb607b52acf8c36c3cc96168887e: Status 404 returned error can't find the container with id cded79540d69db02b2866ab5d4c0c2ba6a85fb607b52acf8c36c3cc96168887e Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.077242 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.970994 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50858935-78fe-4d2d-8390-20a264e996f3","Type":"ContainerStarted","Data":"e5f1b0686e893c679117bd20dde25874b759c3bb6015f4863f027e2a3044eae6"} Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.971408 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50858935-78fe-4d2d-8390-20a264e996f3","Type":"ContainerStarted","Data":"83d4fd2c6bd441ebfd2034682f8d281e749e5b21c03f40e1c0954d7770f2719c"} Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.971432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50858935-78fe-4d2d-8390-20a264e996f3","Type":"ContainerStarted","Data":"95c6f66a79bf4fb66cdf663b99a980c2a1cb09102127279ba967d40ca69b6d72"} Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.982289 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a","Type":"ContainerStarted","Data":"499e36296fa275a12e79f41d12608fd27d8db67035c3ffcc4a5fb622040c276e"} Jan 27 14:35:34 crc kubenswrapper[4833]: I0127 14:35:34.982345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08412a30-4bd0-44dc-a2c8-09db6cf3fc9a","Type":"ContainerStarted","Data":"def26113337e32e88ece6ab2654e46756e55c959a6b7ff507d986ff695648975"} Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.004048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57de8349-e7e0-4ad3-ad99-8ba55b963447","Type":"ContainerStarted","Data":"dd4159f9c4538384890cf68702a2e9910db0d22a0a8f0890ad111003e98cacee"} Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.004087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57de8349-e7e0-4ad3-ad99-8ba55b963447","Type":"ContainerStarted","Data":"c8789cf6d9d554b7f3454362db4aa20a976283bb67bd9d0ed1827e6071d70a0b"} Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.004098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57de8349-e7e0-4ad3-ad99-8ba55b963447","Type":"ContainerStarted","Data":"cded79540d69db02b2866ab5d4c0c2ba6a85fb607b52acf8c36c3cc96168887e"} Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.030587 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.030399465 podStartE2EDuration="2.030399465s" podCreationTimestamp="2026-01-27 14:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:35.005006178 +0000 UTC m=+1436.656330580" watchObservedRunningTime="2026-01-27 14:35:35.030399465 +0000 UTC m=+1436.681723897" Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.035539 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.035528398 podStartE2EDuration="2.035528398s" podCreationTimestamp="2026-01-27 14:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:35.02476213 +0000 UTC m=+1436.676086532" watchObservedRunningTime="2026-01-27 14:35:35.035528398 +0000 UTC m=+1436.686852810" Jan 27 14:35:35 crc kubenswrapper[4833]: I0127 14:35:35.070802 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.070781212 podStartE2EDuration="2.070781212s" podCreationTimestamp="2026-01-27 14:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:35.051016399 +0000 UTC m=+1436.702340811" watchObservedRunningTime="2026-01-27 14:35:35.070781212 +0000 UTC m=+1436.722105624" Jan 27 14:35:38 crc kubenswrapper[4833]: I0127 14:35:38.444557 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:35:38 crc kubenswrapper[4833]: I0127 14:35:38.488862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:35:38 crc kubenswrapper[4833]: I0127 14:35:38.488935 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:35:40 crc kubenswrapper[4833]: I0127 14:35:40.858421 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.407341 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.408995 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.444760 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.486104 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.489887 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:35:43 crc kubenswrapper[4833]: I0127 14:35:43.489951 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:35:44 crc kubenswrapper[4833]: I0127 14:35:44.166639 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:35:44 crc kubenswrapper[4833]: I0127 14:35:44.423601 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50858935-78fe-4d2d-8390-20a264e996f3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:35:44 crc kubenswrapper[4833]: I0127 14:35:44.423637 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50858935-78fe-4d2d-8390-20a264e996f3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:35:44 crc kubenswrapper[4833]: I0127 14:35:44.504593 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57de8349-e7e0-4ad3-ad99-8ba55b963447" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Jan 27 14:35:44 crc kubenswrapper[4833]: I0127 14:35:44.504615 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="57de8349-e7e0-4ad3-ad99-8ba55b963447" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.418496 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.421003 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.423032 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.428721 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.494118 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.494220 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.498410 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:35:53 crc kubenswrapper[4833]: I0127 14:35:53.499586 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:35:54 crc kubenswrapper[4833]: I0127 14:35:54.263464 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:35:54 crc kubenswrapper[4833]: I0127 14:35:54.272294 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.712516 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hk97g"] Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.734902 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.741225 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hk97g"] Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.864038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-catalog-content\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.864079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bjw\" (UniqueName: \"kubernetes.io/projected/51526a16-f6c1-4cda-935a-49ad10c53a33-kube-api-access-k8bjw\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.864172 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-utilities\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.966275 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-utilities\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.966485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-catalog-content\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.966517 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8bjw\" (UniqueName: \"kubernetes.io/projected/51526a16-f6c1-4cda-935a-49ad10c53a33-kube-api-access-k8bjw\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.967005 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-utilities\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.967020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51526a16-f6c1-4cda-935a-49ad10c53a33-catalog-content\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:55 crc kubenswrapper[4833]: I0127 14:35:55.995330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8bjw\" (UniqueName: 
\"kubernetes.io/projected/51526a16-f6c1-4cda-935a-49ad10c53a33-kube-api-access-k8bjw\") pod \"certified-operators-hk97g\" (UID: \"51526a16-f6c1-4cda-935a-49ad10c53a33\") " pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:56 crc kubenswrapper[4833]: I0127 14:35:56.069356 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:35:57 crc kubenswrapper[4833]: I0127 14:35:56.621115 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hk97g"] Jan 27 14:35:57 crc kubenswrapper[4833]: I0127 14:35:57.302356 4833 generic.go:334] "Generic (PLEG): container finished" podID="51526a16-f6c1-4cda-935a-49ad10c53a33" containerID="4075fc70eca174e20065d06716468d0891e944a38c4f2dd7bbe9c2ebd5d6d492" exitCode=0 Jan 27 14:35:57 crc kubenswrapper[4833]: I0127 14:35:57.302568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk97g" event={"ID":"51526a16-f6c1-4cda-935a-49ad10c53a33","Type":"ContainerDied","Data":"4075fc70eca174e20065d06716468d0891e944a38c4f2dd7bbe9c2ebd5d6d492"} Jan 27 14:35:57 crc kubenswrapper[4833]: I0127 14:35:57.302664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk97g" event={"ID":"51526a16-f6c1-4cda-935a-49ad10c53a33","Type":"ContainerStarted","Data":"7a62f3fe26d15f0806d6b807a38d1200b6011c2abf8064eabd12750c0892f7da"} Jan 27 14:36:01 crc kubenswrapper[4833]: I0127 14:36:01.341612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk97g" event={"ID":"51526a16-f6c1-4cda-935a-49ad10c53a33","Type":"ContainerStarted","Data":"f266634dfba7d5968dcbd395862a2a1a67cc6f75d5d05e9eef76eb5829d581c0"} Jan 27 14:36:02 crc kubenswrapper[4833]: I0127 14:36:02.353165 4833 generic.go:334] "Generic (PLEG): container finished" podID="51526a16-f6c1-4cda-935a-49ad10c53a33" 
containerID="f266634dfba7d5968dcbd395862a2a1a67cc6f75d5d05e9eef76eb5829d581c0" exitCode=0 Jan 27 14:36:02 crc kubenswrapper[4833]: I0127 14:36:02.353231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk97g" event={"ID":"51526a16-f6c1-4cda-935a-49ad10c53a33","Type":"ContainerDied","Data":"f266634dfba7d5968dcbd395862a2a1a67cc6f75d5d05e9eef76eb5829d581c0"} Jan 27 14:36:02 crc kubenswrapper[4833]: I0127 14:36:02.353500 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hk97g" event={"ID":"51526a16-f6c1-4cda-935a-49ad10c53a33","Type":"ContainerStarted","Data":"879555c3988f5262d62eac4f75add1c7ae29ff828f8ebaab31c762465b38bf22"} Jan 27 14:36:02 crc kubenswrapper[4833]: I0127 14:36:02.374212 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hk97g" podStartSLOduration=2.762404574 podStartE2EDuration="7.374193604s" podCreationTimestamp="2026-01-27 14:35:55 +0000 UTC" firstStartedPulling="2026-01-27 14:35:57.304844272 +0000 UTC m=+1458.956168684" lastFinishedPulling="2026-01-27 14:36:01.916633302 +0000 UTC m=+1463.567957714" observedRunningTime="2026-01-27 14:36:02.371759566 +0000 UTC m=+1464.023083988" watchObservedRunningTime="2026-01-27 14:36:02.374193604 +0000 UTC m=+1464.025518006" Jan 27 14:36:02 crc kubenswrapper[4833]: I0127 14:36:02.707043 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:03 crc kubenswrapper[4833]: I0127 14:36:03.666208 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:06 crc kubenswrapper[4833]: I0127 14:36:06.070259 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:36:06 crc kubenswrapper[4833]: I0127 14:36:06.070325 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:36:06 crc kubenswrapper[4833]: I0127 14:36:06.118182 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:36:06 crc kubenswrapper[4833]: I0127 14:36:06.927295 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="rabbitmq" containerID="cri-o://f2c2ac8c4b8f5f8a2738bf99f826e32f49d5267d2a4f1954669c0a2a08017808" gracePeriod=604796 Jan 27 14:36:08 crc kubenswrapper[4833]: I0127 14:36:08.068846 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="rabbitmq" containerID="cri-o://baefb3371f995e5614217f2f79e7b17f9c5402013bdb96b3d067d8592c2136fa" gracePeriod=604796 Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.469018 4833 generic.go:334] "Generic (PLEG): container finished" podID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerID="f2c2ac8c4b8f5f8a2738bf99f826e32f49d5267d2a4f1954669c0a2a08017808" exitCode=0 Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.469088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerDied","Data":"f2c2ac8c4b8f5f8a2738bf99f826e32f49d5267d2a4f1954669c0a2a08017808"} Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.469862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9b143505-7ef8-4e88-b977-8fc8e3471474","Type":"ContainerDied","Data":"31b253b2b1c2e0b6d64c16d152ba94aa9fd26269dd18f8a38cb93fa23543f067"} Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.469882 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="31b253b2b1c2e0b6d64c16d152ba94aa9fd26269dd18f8a38cb93fa23543f067" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.513279 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623658 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623723 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjkv\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: 
\"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623939 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.623991 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.624031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.624046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.624061 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.624095 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins\") pod \"9b143505-7ef8-4e88-b977-8fc8e3471474\" (UID: \"9b143505-7ef8-4e88-b977-8fc8e3471474\") " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.627279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.630481 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.634632 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.634947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info" (OuterVolumeSpecName: "pod-info") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.635545 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.638363 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv" (OuterVolumeSpecName: "kube-api-access-gkjkv") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "kube-api-access-gkjkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.638805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.648009 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.689016 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data" (OuterVolumeSpecName: "config-data") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.723427 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf" (OuterVolumeSpecName: "server-conf") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726254 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726411 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726508 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726565 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9b143505-7ef8-4e88-b977-8fc8e3471474-erlang-cookie-secret\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726615 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726673 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726720 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726767 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9b143505-7ef8-4e88-b977-8fc8e3471474-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726817 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9b143505-7ef8-4e88-b977-8fc8e3471474-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.726863 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjkv\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-kube-api-access-gkjkv\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.754487 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.769663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9b143505-7ef8-4e88-b977-8fc8e3471474" (UID: "9b143505-7ef8-4e88-b977-8fc8e3471474"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.800869 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.831234 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:13 crc kubenswrapper[4833]: I0127 14:36:13.831316 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9b143505-7ef8-4e88-b977-8fc8e3471474-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.484288 4833 generic.go:334] "Generic (PLEG): container finished" podID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerID="baefb3371f995e5614217f2f79e7b17f9c5402013bdb96b3d067d8592c2136fa" exitCode=0 Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.484691 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.486574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerDied","Data":"baefb3371f995e5614217f2f79e7b17f9c5402013bdb96b3d067d8592c2136fa"} Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.526354 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.537306 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.634493 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:14 crc kubenswrapper[4833]: E0127 14:36:14.634933 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="setup-container" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.634945 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="setup-container" Jan 27 14:36:14 crc kubenswrapper[4833]: E0127 14:36:14.634989 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="rabbitmq" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.634997 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="rabbitmq" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.635166 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" containerName="rabbitmq" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.636230 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642012 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642208 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642336 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642585 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642754 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.642931 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bpn7b" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.675985 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.737714 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.795585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.795800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.795849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3d13b73-8eab-4c26-abe0-bdda094d795b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.795943 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.795971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3d13b73-8eab-4c26-abe0-bdda094d795b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796153 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796270 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsbgw\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-kube-api-access-xsbgw\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.796350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.857503 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsbgw\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-kube-api-access-xsbgw\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898392 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3d13b73-8eab-4c26-abe0-bdda094d795b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898550 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898565 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3d13b73-8eab-4c26-abe0-bdda094d795b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898638 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.898668 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.899124 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.899386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.899708 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.902541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.902895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b3d13b73-8eab-4c26-abe0-bdda094d795b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc 
kubenswrapper[4833]: I0127 14:36:14.903957 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.905516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.923604 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b3d13b73-8eab-4c26-abe0-bdda094d795b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.929356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b3d13b73-8eab-4c26-abe0-bdda094d795b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.933304 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.935659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsbgw\" (UniqueName: 
\"kubernetes.io/projected/b3d13b73-8eab-4c26-abe0-bdda094d795b-kube-api-access-xsbgw\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:14 crc kubenswrapper[4833]: I0127 14:36:14.968845 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b3d13b73-8eab-4c26-abe0-bdda094d795b\") " pod="openstack/rabbitmq-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001372 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001475 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrspt\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001542 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc 
kubenswrapper[4833]: I0127 14:36:15.001574 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001636 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001737 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001765 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001824 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.001960 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\" (UID: \"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea\") " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.007499 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.010137 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.011387 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.011903 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info" (OuterVolumeSpecName: "pod-info") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.013545 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.014684 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.016503 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.025178 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt" (OuterVolumeSpecName: "kube-api-access-vrspt") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "kube-api-access-vrspt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.025626 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.040763 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data" (OuterVolumeSpecName: "config-data") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.065746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf" (OuterVolumeSpecName: "server-conf") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105077 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105117 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105137 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105167 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105181 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105192 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105202 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrspt\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-kube-api-access-vrspt\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105210 4833 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105218 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.105225 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.136028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" (UID: "f06b8bd8-3e5e-4dfb-9c28-562cb6874dea"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.152018 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.208195 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.208230 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.228832 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b143505-7ef8-4e88-b977-8fc8e3471474" path="/var/lib/kubelet/pods/9b143505-7ef8-4e88-b977-8fc8e3471474/volumes" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.497458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f06b8bd8-3e5e-4dfb-9c28-562cb6874dea","Type":"ContainerDied","Data":"deda8311b2cb01f19f425ccdda0ca2010357950ffaccd8108de2e5c724531ac3"} Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.497911 4833 scope.go:117] "RemoveContainer" containerID="baefb3371f995e5614217f2f79e7b17f9c5402013bdb96b3d067d8592c2136fa" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.497628 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.527564 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.541657 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.557307 4833 scope.go:117] "RemoveContainer" containerID="ebad24caa0ef0da71ac712df5c8ba0c36956bda6947f1bbc61a6a1deb89786ee" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.560545 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:15 crc kubenswrapper[4833]: E0127 14:36:15.560967 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="setup-container" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.560991 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="setup-container" Jan 27 14:36:15 crc kubenswrapper[4833]: E0127 14:36:15.561034 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="rabbitmq" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.561042 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="rabbitmq" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.561256 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" containerName="rabbitmq" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.573054 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.578928 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.585984 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586076 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586350 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586511 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586716 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.586822 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tgbfx" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.591682 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.722915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d23272-4189-49d5-9df5-e7347a122434-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d23272-4189-49d5-9df5-e7347a122434-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723327 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723433 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723575 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 
14:36:15.723686 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723863 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96s2d\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-kube-api-access-96s2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.723994 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.724091 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.724176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc 
kubenswrapper[4833]: I0127 14:36:15.825802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.825857 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.825885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.825963 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d23272-4189-49d5-9df5-e7347a122434-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826042 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d23272-4189-49d5-9df5-e7347a122434-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826154 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826191 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.826228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96s2d\" (UniqueName: 
\"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-kube-api-access-96s2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.827233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.827649 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.827234 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.828890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.829027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.830391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90d23272-4189-49d5-9df5-e7347a122434-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.830471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90d23272-4189-49d5-9df5-e7347a122434-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.832607 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90d23272-4189-49d5-9df5-e7347a122434-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.832898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.832902 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc 
kubenswrapper[4833]: I0127 14:36:15.844658 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96s2d\" (UniqueName: \"kubernetes.io/projected/90d23272-4189-49d5-9df5-e7347a122434-kube-api-access-96s2d\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:15 crc kubenswrapper[4833]: I0127 14:36:15.879213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"90d23272-4189-49d5-9df5-e7347a122434\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.087013 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.121061 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hk97g" Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.212587 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hk97g"] Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.251099 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.251372 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8vfp" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="registry-server" containerID="cri-o://d1218809ebe407d0cc6fb0f2ffb771136ce8b2ac9470b5890c5fcfc2eda7e994" gracePeriod=2 Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.511748 4833 generic.go:334] "Generic (PLEG): container finished" podID="debecd7b-5b83-4347-9c4f-bb33d20975e5" 
containerID="d1218809ebe407d0cc6fb0f2ffb771136ce8b2ac9470b5890c5fcfc2eda7e994" exitCode=0 Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.511814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerDied","Data":"d1218809ebe407d0cc6fb0f2ffb771136ce8b2ac9470b5890c5fcfc2eda7e994"} Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.517505 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3d13b73-8eab-4c26-abe0-bdda094d795b","Type":"ContainerStarted","Data":"39e010e2c974adfaa22c77730d6b9d72e72d232aa93633dd4fc427a742c9adb8"} Jan 27 14:36:16 crc kubenswrapper[4833]: I0127 14:36:16.608427 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:36:16 crc kubenswrapper[4833]: W0127 14:36:16.616504 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d23272_4189_49d5_9df5_e7347a122434.slice/crio-2894ae5b3c747383dc1a4a23fb6b07c485bd0912666620fd0247da46d8f44309 WatchSource:0}: Error finding container 2894ae5b3c747383dc1a4a23fb6b07c485bd0912666620fd0247da46d8f44309: Status 404 returned error can't find the container with id 2894ae5b3c747383dc1a4a23fb6b07c485bd0912666620fd0247da46d8f44309 Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.086377 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.168321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities\") pod \"debecd7b-5b83-4347-9c4f-bb33d20975e5\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.168404 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content\") pod \"debecd7b-5b83-4347-9c4f-bb33d20975e5\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.168437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7kgm\" (UniqueName: \"kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm\") pod \"debecd7b-5b83-4347-9c4f-bb33d20975e5\" (UID: \"debecd7b-5b83-4347-9c4f-bb33d20975e5\") " Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.170400 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities" (OuterVolumeSpecName: "utilities") pod "debecd7b-5b83-4347-9c4f-bb33d20975e5" (UID: "debecd7b-5b83-4347-9c4f-bb33d20975e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.179218 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm" (OuterVolumeSpecName: "kube-api-access-q7kgm") pod "debecd7b-5b83-4347-9c4f-bb33d20975e5" (UID: "debecd7b-5b83-4347-9c4f-bb33d20975e5"). InnerVolumeSpecName "kube-api-access-q7kgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.222927 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06b8bd8-3e5e-4dfb-9c28-562cb6874dea" path="/var/lib/kubelet/pods/f06b8bd8-3e5e-4dfb-9c28-562cb6874dea/volumes" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.270671 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.270707 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7kgm\" (UniqueName: \"kubernetes.io/projected/debecd7b-5b83-4347-9c4f-bb33d20975e5-kube-api-access-q7kgm\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.279709 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "debecd7b-5b83-4347-9c4f-bb33d20975e5" (UID: "debecd7b-5b83-4347-9c4f-bb33d20975e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.373006 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/debecd7b-5b83-4347-9c4f-bb33d20975e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.530140 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8vfp" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.530138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8vfp" event={"ID":"debecd7b-5b83-4347-9c4f-bb33d20975e5","Type":"ContainerDied","Data":"e259eac1aef3c7a57233402f125aef467a869d0b7f654a3dc6ed545da08803fc"} Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.530187 4833 scope.go:117] "RemoveContainer" containerID="d1218809ebe407d0cc6fb0f2ffb771136ce8b2ac9470b5890c5fcfc2eda7e994" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.532180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3d13b73-8eab-4c26-abe0-bdda094d795b","Type":"ContainerStarted","Data":"f501c380035f2d564771d200d6fada1dbddc209908d9ef78a16659bea39b334e"} Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.533978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90d23272-4189-49d5-9df5-e7347a122434","Type":"ContainerStarted","Data":"2894ae5b3c747383dc1a4a23fb6b07c485bd0912666620fd0247da46d8f44309"} Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.567946 4833 scope.go:117] "RemoveContainer" containerID="63942fa7869c24c73d377c2535deeffdc53e82fa4131a364d5302fc5e72eb3a3" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.597773 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.610432 4833 scope.go:117] "RemoveContainer" containerID="d2f4fe6d3b2ca37d383b673c3a42ca32e44c3e22354a8fb9242f6c0e46d85047" Jan 27 14:36:17 crc kubenswrapper[4833]: I0127 14:36:17.613535 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8vfp"] Jan 27 14:36:18 crc kubenswrapper[4833]: I0127 14:36:18.546842 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90d23272-4189-49d5-9df5-e7347a122434","Type":"ContainerStarted","Data":"9409aa6682c35d521dfe5600274c456c313dd84c8e55aa0680b736c558a9732e"} Jan 27 14:36:19 crc kubenswrapper[4833]: I0127 14:36:19.229122 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" path="/var/lib/kubelet/pods/debecd7b-5b83-4347-9c4f-bb33d20975e5/volumes" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.869373 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:24 crc kubenswrapper[4833]: E0127 14:36:24.870188 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="extract-utilities" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.870200 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="extract-utilities" Jan 27 14:36:24 crc kubenswrapper[4833]: E0127 14:36:24.870211 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="registry-server" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.870216 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="registry-server" Jan 27 14:36:24 crc kubenswrapper[4833]: E0127 14:36:24.870236 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="extract-content" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.870242 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" containerName="extract-content" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.870421 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="debecd7b-5b83-4347-9c4f-bb33d20975e5" 
containerName="registry-server" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.871565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.875364 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.893087 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.937161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.937467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.937586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.937704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.937941 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.938224 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:24 crc kubenswrapper[4833]: I0127 14:36:24.938264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzb2w\" (UniqueName: \"kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dzb2w\" (UniqueName: \"kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040324 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.040375 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041485 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.041535 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.063337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzb2w\" (UniqueName: \"kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w\") pod \"dnsmasq-dns-5576978c7c-vsmdk\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.192240 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:25 crc kubenswrapper[4833]: I0127 14:36:25.677863 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:26 crc kubenswrapper[4833]: I0127 14:36:26.615330 4833 generic.go:334] "Generic (PLEG): container finished" podID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerID="8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15" exitCode=0 Jan 27 14:36:26 crc kubenswrapper[4833]: I0127 14:36:26.615412 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" event={"ID":"ecea3d2e-5a80-4c7e-854b-73b3555c6955","Type":"ContainerDied","Data":"8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15"} Jan 27 14:36:26 crc kubenswrapper[4833]: I0127 14:36:26.615915 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" event={"ID":"ecea3d2e-5a80-4c7e-854b-73b3555c6955","Type":"ContainerStarted","Data":"bca6037aa421b4f0234d4dae1c3fa1df0db2d825c437306aaafd38e543808e7c"} Jan 27 14:36:27 crc kubenswrapper[4833]: I0127 14:36:27.631552 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" event={"ID":"ecea3d2e-5a80-4c7e-854b-73b3555c6955","Type":"ContainerStarted","Data":"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e"} Jan 27 14:36:27 crc kubenswrapper[4833]: I0127 14:36:27.631868 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:27 crc kubenswrapper[4833]: I0127 14:36:27.661886 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" podStartSLOduration=3.661867555 podStartE2EDuration="3.661867555s" podCreationTimestamp="2026-01-27 14:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:27.66166936 +0000 UTC m=+1489.312993772" watchObservedRunningTime="2026-01-27 14:36:27.661867555 +0000 UTC m=+1489.313191967" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.195977 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.274007 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.274336 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="dnsmasq-dns" containerID="cri-o://d069df7882429b82ff956a7fe3e3fc9d6730c736d8d69f57c3a40b3c584b6323" gracePeriod=10 Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.466372 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8665b49f-vcpxm"] Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.471714 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.481974 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8665b49f-vcpxm"] Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595463 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84c8m\" (UniqueName: \"kubernetes.io/projected/a4b72676-a66b-4a41-80c1-1c634debf0ac-kube-api-access-84c8m\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595800 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595860 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595887 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595911 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-config\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.595993 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-svc\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698034 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-swift-storage-0\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698142 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-config\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-svc\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698317 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84c8m\" (UniqueName: \"kubernetes.io/projected/a4b72676-a66b-4a41-80c1-1c634debf0ac-kube-api-access-84c8m\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.698369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.699058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.699116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-config\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.699196 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-ovsdbserver-sb\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.701932 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.702388 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-svc\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.704146 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b72676-a66b-4a41-80c1-1c634debf0ac-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.729144 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84c8m\" (UniqueName: \"kubernetes.io/projected/a4b72676-a66b-4a41-80c1-1c634debf0ac-kube-api-access-84c8m\") pod \"dnsmasq-dns-7c8665b49f-vcpxm\" (UID: \"a4b72676-a66b-4a41-80c1-1c634debf0ac\") " pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.730264 4833 generic.go:334] "Generic (PLEG): container finished" podID="b040472d-4a33-4442-bacd-23c8a68983c9" containerID="d069df7882429b82ff956a7fe3e3fc9d6730c736d8d69f57c3a40b3c584b6323" exitCode=0 Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.730305 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerDied","Data":"d069df7882429b82ff956a7fe3e3fc9d6730c736d8d69f57c3a40b3c584b6323"} Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.730330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" event={"ID":"b040472d-4a33-4442-bacd-23c8a68983c9","Type":"ContainerDied","Data":"e436b5aca26a298a60b98ec925602b6de9e8849e5901a250aabdef355b64e238"} Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.730339 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e436b5aca26a298a60b98ec925602b6de9e8849e5901a250aabdef355b64e238" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.789910 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.804406 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.906868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.906921 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.906966 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9gjf\" (UniqueName: \"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.907313 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.907437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.911728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf" (OuterVolumeSpecName: "kube-api-access-s9gjf") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). InnerVolumeSpecName "kube-api-access-s9gjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.975341 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.983101 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.986942 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:35 crc kubenswrapper[4833]: I0127 14:36:35.994102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config" (OuterVolumeSpecName: "config") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.010594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc\") pod \"b040472d-4a33-4442-bacd-23c8a68983c9\" (UID: \"b040472d-4a33-4442-bacd-23c8a68983c9\") " Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.011938 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.011966 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.012020 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.012037 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.012051 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9gjf\" (UniqueName: \"kubernetes.io/projected/b040472d-4a33-4442-bacd-23c8a68983c9-kube-api-access-s9gjf\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.073309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc" 
(OuterVolumeSpecName: "dns-svc") pod "b040472d-4a33-4442-bacd-23c8a68983c9" (UID: "b040472d-4a33-4442-bacd-23c8a68983c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.115545 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b040472d-4a33-4442-bacd-23c8a68983c9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.346527 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8665b49f-vcpxm"] Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.742169 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4b72676-a66b-4a41-80c1-1c634debf0ac" containerID="b2aa375fd2a6efc7c1c8030da10acf87290a018715bb347629a7f57330142de6" exitCode=0 Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.742223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" event={"ID":"a4b72676-a66b-4a41-80c1-1c634debf0ac","Type":"ContainerDied","Data":"b2aa375fd2a6efc7c1c8030da10acf87290a018715bb347629a7f57330142de6"} Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.742260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" event={"ID":"a4b72676-a66b-4a41-80c1-1c634debf0ac","Type":"ContainerStarted","Data":"a8152e066f1a9d466d6392a1c4448aa0872445c683413c8283820d6b09b6ae6a"} Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.742272 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-2cdld" Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.947409 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:36:36 crc kubenswrapper[4833]: I0127 14:36:36.957435 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-2cdld"] Jan 27 14:36:37 crc kubenswrapper[4833]: I0127 14:36:37.222270 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" path="/var/lib/kubelet/pods/b040472d-4a33-4442-bacd-23c8a68983c9/volumes" Jan 27 14:36:37 crc kubenswrapper[4833]: I0127 14:36:37.755286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" event={"ID":"a4b72676-a66b-4a41-80c1-1c634debf0ac","Type":"ContainerStarted","Data":"619a66dad99082926fe2411f55f2c89cc381dfac538955585298b370535fdb7c"} Jan 27 14:36:37 crc kubenswrapper[4833]: I0127 14:36:37.755598 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:45 crc kubenswrapper[4833]: I0127 14:36:45.806637 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" Jan 27 14:36:45 crc kubenswrapper[4833]: I0127 14:36:45.838470 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c8665b49f-vcpxm" podStartSLOduration=10.838422589 podStartE2EDuration="10.838422589s" podCreationTimestamp="2026-01-27 14:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:37.789523085 +0000 UTC m=+1499.440847517" watchObservedRunningTime="2026-01-27 14:36:45.838422589 +0000 UTC m=+1507.489747021" Jan 27 14:36:45 crc kubenswrapper[4833]: I0127 14:36:45.875033 4833 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:45 crc kubenswrapper[4833]: I0127 14:36:45.875323 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="dnsmasq-dns" containerID="cri-o://d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e" gracePeriod=10 Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.372653 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.443252 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.443293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.521586 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.526854 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544279 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544312 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544403 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544483 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dzb2w\" (UniqueName: \"kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w\") pod \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\" (UID: \"ecea3d2e-5a80-4c7e-854b-73b3555c6955\") " Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544845 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.544859 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.548424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w" (OuterVolumeSpecName: "kube-api-access-dzb2w") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "kube-api-access-dzb2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.596584 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.601360 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.606050 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config" (OuterVolumeSpecName: "config") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.626694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ecea3d2e-5a80-4c7e-854b-73b3555c6955" (UID: "ecea3d2e-5a80-4c7e-854b-73b3555c6955"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.646236 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.646264 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.646277 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.646289 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecea3d2e-5a80-4c7e-854b-73b3555c6955-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.646299 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzb2w\" (UniqueName: \"kubernetes.io/projected/ecea3d2e-5a80-4c7e-854b-73b3555c6955-kube-api-access-dzb2w\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.857574 4833 generic.go:334] "Generic (PLEG): container finished" podID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerID="d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e" exitCode=0 Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.858040 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.858091 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" event={"ID":"ecea3d2e-5a80-4c7e-854b-73b3555c6955","Type":"ContainerDied","Data":"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e"} Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.858235 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vsmdk" event={"ID":"ecea3d2e-5a80-4c7e-854b-73b3555c6955","Type":"ContainerDied","Data":"bca6037aa421b4f0234d4dae1c3fa1df0db2d825c437306aaafd38e543808e7c"} Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.858289 4833 scope.go:117] "RemoveContainer" containerID="d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.879753 4833 scope.go:117] "RemoveContainer" containerID="8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.905676 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.916647 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vsmdk"] Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.923755 4833 scope.go:117] "RemoveContainer" containerID="d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e" Jan 27 14:36:46 crc kubenswrapper[4833]: E0127 14:36:46.924107 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e\": container with ID starting with d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e not found: ID does not exist" 
containerID="d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.924139 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e"} err="failed to get container status \"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e\": rpc error: code = NotFound desc = could not find container \"d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e\": container with ID starting with d4aaf2862d06f85d6b0985dd93ecd0f83af44049573f8d13c6d26b8bc16ded8e not found: ID does not exist" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.924161 4833 scope.go:117] "RemoveContainer" containerID="8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15" Jan 27 14:36:46 crc kubenswrapper[4833]: E0127 14:36:46.924546 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15\": container with ID starting with 8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15 not found: ID does not exist" containerID="8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15" Jan 27 14:36:46 crc kubenswrapper[4833]: I0127 14:36:46.924577 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15"} err="failed to get container status \"8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15\": rpc error: code = NotFound desc = could not find container \"8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15\": container with ID starting with 8d0d902d2240fc9d5aec907ea60944ed4a852091861d8f664a5cc98901a3fe15 not found: ID does not exist" Jan 27 14:36:47 crc kubenswrapper[4833]: I0127 14:36:47.222589 4833 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" path="/var/lib/kubelet/pods/ecea3d2e-5a80-4c7e-854b-73b3555c6955/volumes" Jan 27 14:36:49 crc kubenswrapper[4833]: I0127 14:36:49.888003 4833 generic.go:334] "Generic (PLEG): container finished" podID="b3d13b73-8eab-4c26-abe0-bdda094d795b" containerID="f501c380035f2d564771d200d6fada1dbddc209908d9ef78a16659bea39b334e" exitCode=0 Jan 27 14:36:49 crc kubenswrapper[4833]: I0127 14:36:49.888539 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3d13b73-8eab-4c26-abe0-bdda094d795b","Type":"ContainerDied","Data":"f501c380035f2d564771d200d6fada1dbddc209908d9ef78a16659bea39b334e"} Jan 27 14:36:50 crc kubenswrapper[4833]: I0127 14:36:50.897366 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b3d13b73-8eab-4c26-abe0-bdda094d795b","Type":"ContainerStarted","Data":"d71b2884fe901c77e9e6cecfefcd954d0e806803a9c8415225a9d953045551d7"} Jan 27 14:36:50 crc kubenswrapper[4833]: I0127 14:36:50.897855 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:36:50 crc kubenswrapper[4833]: I0127 14:36:50.899391 4833 generic.go:334] "Generic (PLEG): container finished" podID="90d23272-4189-49d5-9df5-e7347a122434" containerID="9409aa6682c35d521dfe5600274c456c313dd84c8e55aa0680b736c558a9732e" exitCode=0 Jan 27 14:36:50 crc kubenswrapper[4833]: I0127 14:36:50.899430 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90d23272-4189-49d5-9df5-e7347a122434","Type":"ContainerDied","Data":"9409aa6682c35d521dfe5600274c456c313dd84c8e55aa0680b736c558a9732e"} Jan 27 14:36:50 crc kubenswrapper[4833]: I0127 14:36:50.925642 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.925620369 
podStartE2EDuration="36.925620369s" podCreationTimestamp="2026-01-27 14:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:50.916862784 +0000 UTC m=+1512.568187196" watchObservedRunningTime="2026-01-27 14:36:50.925620369 +0000 UTC m=+1512.576944771" Jan 27 14:36:51 crc kubenswrapper[4833]: I0127 14:36:51.911161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90d23272-4189-49d5-9df5-e7347a122434","Type":"ContainerStarted","Data":"7394e25a8d5e4b518fcecf121b91004e2a5bf1a12003c19fbf9ab02ee9cc11de"} Jan 27 14:36:51 crc kubenswrapper[4833]: I0127 14:36:51.911690 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:36:51 crc kubenswrapper[4833]: I0127 14:36:51.952691 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.952557731 podStartE2EDuration="36.952557731s" podCreationTimestamp="2026-01-27 14:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:51.93780042 +0000 UTC m=+1513.589124832" watchObservedRunningTime="2026-01-27 14:36:51.952557731 +0000 UTC m=+1513.603882133" Jan 27 14:36:52 crc kubenswrapper[4833]: I0127 14:36:52.312078 4833 scope.go:117] "RemoveContainer" containerID="8490ab1952cf3713fa98897b3b243166e18516a777630355c0518129f22ce82d" Jan 27 14:37:02 crc kubenswrapper[4833]: I0127 14:37:02.260320 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:37:02 crc kubenswrapper[4833]: I0127 
14:37:02.260890 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.043694 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"] Jan 27 14:37:04 crc kubenswrapper[4833]: E0127 14:37:04.044525 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="init" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044544 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="init" Jan 27 14:37:04 crc kubenswrapper[4833]: E0127 14:37:04.044562 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044570 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: E0127 14:37:04.044589 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044597 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: E0127 14:37:04.044635 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="init" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044643 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="init" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044871 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b040472d-4a33-4442-bacd-23c8a68983c9" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.044921 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecea3d2e-5a80-4c7e-854b-73b3555c6955" containerName="dnsmasq-dns" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.045774 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.049126 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.049421 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.049678 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.052870 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.064812 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"] Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.095760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.095857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.095936 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.096053 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb766\" (UniqueName: \"kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.198820 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb766\" (UniqueName: \"kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.199624 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.199699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.199830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.206583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.207015 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.216934 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.221734 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb766\" (UniqueName: \"kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.379967 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"
Jan 27 14:37:04 crc kubenswrapper[4833]: I0127 14:37:04.985095 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"]
Jan 27 14:37:05 crc kubenswrapper[4833]: I0127 14:37:05.029762 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 14:37:05 crc kubenswrapper[4833]: I0127 14:37:05.042027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" event={"ID":"4ac48901-049e-4506-b266-fa322c384c6b","Type":"ContainerStarted","Data":"a299797f79f9f93072da9220720bf5aa2ba0b32491044afb08782c931038cc96"}
Jan 27 14:37:06 crc kubenswrapper[4833]: I0127 14:37:06.095694 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:37:17 crc kubenswrapper[4833]: I0127 14:37:17.645227 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" podStartSLOduration=1.2907054009999999 podStartE2EDuration="13.64520485s" podCreationTimestamp="2026-01-27 14:37:04 +0000 UTC" firstStartedPulling="2026-01-27 14:37:04.988386214 +0000 UTC m=+1526.639710616" lastFinishedPulling="2026-01-27 14:37:17.342885653 +0000 UTC m=+1538.994210065" observedRunningTime="2026-01-27 14:37:17.634055376 +0000 UTC m=+1539.285379798" watchObservedRunningTime="2026-01-27 14:37:17.64520485 +0000 UTC m=+1539.296529262"
Jan 27 14:37:18 crc kubenswrapper[4833]: I0127 14:37:18.624268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" event={"ID":"4ac48901-049e-4506-b266-fa322c384c6b","Type":"ContainerStarted","Data":"a974af76bf8722c71ff16742712f1f1110c784f73f963ae6e01dadaedad0b2ab"}
Jan 27 14:37:30 crc kubenswrapper[4833]: I0127 14:37:30.738675 4833 generic.go:334] "Generic (PLEG): container finished" podID="4ac48901-049e-4506-b266-fa322c384c6b" containerID="a974af76bf8722c71ff16742712f1f1110c784f73f963ae6e01dadaedad0b2ab" exitCode=0
Jan 27 14:37:30 crc kubenswrapper[4833]: I0127 14:37:30.739257 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" event={"ID":"4ac48901-049e-4506-b266-fa322c384c6b","Type":"ContainerDied","Data":"a974af76bf8722c71ff16742712f1f1110c784f73f963ae6e01dadaedad0b2ab"}
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.260513 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.260983 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.394488 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.488360 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory\") pod \"4ac48901-049e-4506-b266-fa322c384c6b\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") "
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.488437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle\") pod \"4ac48901-049e-4506-b266-fa322c384c6b\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") "
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.488578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb766\" (UniqueName: \"kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766\") pod \"4ac48901-049e-4506-b266-fa322c384c6b\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") "
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.488643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam\") pod \"4ac48901-049e-4506-b266-fa322c384c6b\" (UID: \"4ac48901-049e-4506-b266-fa322c384c6b\") "
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.496244 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766" (OuterVolumeSpecName: "kube-api-access-nb766") pod "4ac48901-049e-4506-b266-fa322c384c6b" (UID: "4ac48901-049e-4506-b266-fa322c384c6b"). InnerVolumeSpecName "kube-api-access-nb766". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.498747 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4ac48901-049e-4506-b266-fa322c384c6b" (UID: "4ac48901-049e-4506-b266-fa322c384c6b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.522599 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ac48901-049e-4506-b266-fa322c384c6b" (UID: "4ac48901-049e-4506-b266-fa322c384c6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.535431 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory" (OuterVolumeSpecName: "inventory") pod "4ac48901-049e-4506-b266-fa322c384c6b" (UID: "4ac48901-049e-4506-b266-fa322c384c6b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.591215 4833 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.591264 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb766\" (UniqueName: \"kubernetes.io/projected/4ac48901-049e-4506-b266-fa322c384c6b-kube-api-access-nb766\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.591279 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.591292 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ac48901-049e-4506-b266-fa322c384c6b-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.800856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp" event={"ID":"4ac48901-049e-4506-b266-fa322c384c6b","Type":"ContainerDied","Data":"a299797f79f9f93072da9220720bf5aa2ba0b32491044afb08782c931038cc96"}
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.800904 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a299797f79f9f93072da9220720bf5aa2ba0b32491044afb08782c931038cc96"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.800927 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.894495 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"]
Jan 27 14:37:32 crc kubenswrapper[4833]: E0127 14:37:32.895254 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac48901-049e-4506-b266-fa322c384c6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.895291 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac48901-049e-4506-b266-fa322c384c6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.895655 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac48901-049e-4506-b266-fa322c384c6b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.896837 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.899066 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.899865 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.900075 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.901591 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn"
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.913355 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"]
Jan 27 14:37:32 crc kubenswrapper[4833]: I0127 14:37:32.999541 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.000124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.000213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5sn\" (UniqueName: \"kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.101920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5sn\" (UniqueName: \"kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.102021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.102235 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.108283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.108331 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.129439 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5sn\" (UniqueName: \"kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2fsrk\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:33 crc kubenswrapper[4833]: I0127 14:37:33.224031 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:34 crc kubenswrapper[4833]: I0127 14:37:34.108248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"]
Jan 27 14:37:34 crc kubenswrapper[4833]: I0127 14:37:34.834285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk" event={"ID":"7a00a943-178b-4719-b687-d8dc678f41bd","Type":"ContainerStarted","Data":"a3d63d55df25d5ca9b540193debd9b29eafacc79d883c01f86e6b0e6553f039c"}
Jan 27 14:37:36 crc kubenswrapper[4833]: I0127 14:37:36.874737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk" event={"ID":"7a00a943-178b-4719-b687-d8dc678f41bd","Type":"ContainerStarted","Data":"79b26e6fff5e37922a350f8f826688c712b133a22ad5b01fc08f888a258170b6"}
Jan 27 14:37:36 crc kubenswrapper[4833]: I0127 14:37:36.909006 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk" podStartSLOduration=3.444392279 podStartE2EDuration="4.908986837s" podCreationTimestamp="2026-01-27 14:37:32 +0000 UTC" firstStartedPulling="2026-01-27 14:37:34.107456591 +0000 UTC m=+1555.758780993" lastFinishedPulling="2026-01-27 14:37:35.572051139 +0000 UTC m=+1557.223375551" observedRunningTime="2026-01-27 14:37:36.906969957 +0000 UTC m=+1558.558294389" watchObservedRunningTime="2026-01-27 14:37:36.908986837 +0000 UTC m=+1558.560311239"
Jan 27 14:37:38 crc kubenswrapper[4833]: I0127 14:37:38.901640 4833 generic.go:334] "Generic (PLEG): container finished" podID="7a00a943-178b-4719-b687-d8dc678f41bd" containerID="79b26e6fff5e37922a350f8f826688c712b133a22ad5b01fc08f888a258170b6" exitCode=0
Jan 27 14:37:38 crc kubenswrapper[4833]: I0127 14:37:38.901782 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk" event={"ID":"7a00a943-178b-4719-b687-d8dc678f41bd","Type":"ContainerDied","Data":"79b26e6fff5e37922a350f8f826688c712b133a22ad5b01fc08f888a258170b6"}
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.607230 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xkzls"]
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.611753 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.637849 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkzls"]
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.754877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.754969 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpr2l\" (UniqueName: \"kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.755002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.856827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.856944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpr2l\" (UniqueName: \"kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.856981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.857510 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.857544 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.877014 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpr2l\" (UniqueName: \"kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l\") pod \"community-operators-xkzls\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:39 crc kubenswrapper[4833]: I0127 14:37:39.940053 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.469436 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.571239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam\") pod \"7a00a943-178b-4719-b687-d8dc678f41bd\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") "
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.571321 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr5sn\" (UniqueName: \"kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn\") pod \"7a00a943-178b-4719-b687-d8dc678f41bd\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") "
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.571379 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory\") pod \"7a00a943-178b-4719-b687-d8dc678f41bd\" (UID: \"7a00a943-178b-4719-b687-d8dc678f41bd\") "
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.577569 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn" (OuterVolumeSpecName: "kube-api-access-hr5sn") pod "7a00a943-178b-4719-b687-d8dc678f41bd" (UID: "7a00a943-178b-4719-b687-d8dc678f41bd"). InnerVolumeSpecName "kube-api-access-hr5sn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.605389 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory" (OuterVolumeSpecName: "inventory") pod "7a00a943-178b-4719-b687-d8dc678f41bd" (UID: "7a00a943-178b-4719-b687-d8dc678f41bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:40 crc kubenswrapper[4833]: W0127 14:37:40.608565 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c31bbe3_2c71_4920_8b00_b9e0820856f0.slice/crio-4b5514131fc19e445f17ee5f23db155fa38573c41bf17271821aeeab4193058e WatchSource:0}: Error finding container 4b5514131fc19e445f17ee5f23db155fa38573c41bf17271821aeeab4193058e: Status 404 returned error can't find the container with id 4b5514131fc19e445f17ee5f23db155fa38573c41bf17271821aeeab4193058e
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.610302 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkzls"]
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.612332 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a00a943-178b-4719-b687-d8dc678f41bd" (UID: "7a00a943-178b-4719-b687-d8dc678f41bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.673505 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr5sn\" (UniqueName: \"kubernetes.io/projected/7a00a943-178b-4719-b687-d8dc678f41bd-kube-api-access-hr5sn\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.673559 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.673573 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a00a943-178b-4719-b687-d8dc678f41bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.927950 4833 generic.go:334] "Generic (PLEG): container finished" podID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerID="ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2" exitCode=0
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.928046 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerDied","Data":"ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2"}
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.928085 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerStarted","Data":"4b5514131fc19e445f17ee5f23db155fa38573c41bf17271821aeeab4193058e"}
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.933850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk" event={"ID":"7a00a943-178b-4719-b687-d8dc678f41bd","Type":"ContainerDied","Data":"a3d63d55df25d5ca9b540193debd9b29eafacc79d883c01f86e6b0e6553f039c"}
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.933932 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d63d55df25d5ca9b540193debd9b29eafacc79d883c01f86e6b0e6553f039c"
Jan 27 14:37:40 crc kubenswrapper[4833]: I0127 14:37:40.933894 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2fsrk"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.044870 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"]
Jan 27 14:37:41 crc kubenswrapper[4833]: E0127 14:37:41.045505 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a00a943-178b-4719-b687-d8dc678f41bd" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.045525 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a00a943-178b-4719-b687-d8dc678f41bd" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.045766 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a00a943-178b-4719-b687-d8dc678f41bd" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.046657 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.050969 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.051358 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.051587 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.051773 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.057146 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"]
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.085808 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrl72\" (UniqueName: \"kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.085855 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.085903 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.086672 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.188803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.188961 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrl72\" (UniqueName: \"kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.189002 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.189069 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.194215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.194309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.195380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.220068 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrl72\" (UniqueName: \"kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.362549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.918763 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd"]
Jan 27 14:37:41 crc kubenswrapper[4833]: W0127 14:37:41.924816 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c857060_4fc1_48fd_86e9_e17957d53607.slice/crio-c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a WatchSource:0}: Error finding container c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a: Status 404 returned error can't find the container with id c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.945593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" event={"ID":"2c857060-4fc1-48fd-86e9-e17957d53607","Type":"ContainerStarted","Data":"c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a"}
Jan 27 14:37:41 crc kubenswrapper[4833]: I0127 14:37:41.948038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerStarted","Data":"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99"}
Jan 27 14:37:42 crc kubenswrapper[4833]: I0127 14:37:42.965831 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" event={"ID":"2c857060-4fc1-48fd-86e9-e17957d53607","Type":"ContainerStarted","Data":"7ee36984d985936fd38a37878218dc244c60e4de85b1bda272bb3fa48b66e433"}
Jan 27 14:37:42 crc kubenswrapper[4833]: I0127 14:37:42.969675 4833 generic.go:334] "Generic (PLEG): container finished" podID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerID="df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99" exitCode=0
Jan 27 14:37:42 crc kubenswrapper[4833]: I0127 14:37:42.969702 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerDied","Data":"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99"}
Jan 27 14:37:43 crc kubenswrapper[4833]: I0127 14:37:43.013297 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" podStartSLOduration=1.5797394 podStartE2EDuration="2.013266576s" podCreationTimestamp="2026-01-27 14:37:41 +0000 UTC" firstStartedPulling="2026-01-27 14:37:41.926593728 +0000 UTC m=+1563.577918130" lastFinishedPulling="2026-01-27 14:37:42.360120894 +0000 UTC m=+1564.011445306" observedRunningTime="2026-01-27 14:37:42.996606278 +0000 UTC m=+1564.647930740" watchObservedRunningTime="2026-01-27 14:37:43.013266576 +0000 UTC m=+1564.664591008"
Jan 27 14:37:43 crc kubenswrapper[4833]: I0127 14:37:43.985155 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerStarted","Data":"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21"}
Jan 27 14:37:44 crc kubenswrapper[4833]: I0127 14:37:44.018205 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xkzls" podStartSLOduration=2.479914961 podStartE2EDuration="5.018172909s" podCreationTimestamp="2026-01-27 14:37:39 +0000 UTC" firstStartedPulling="2026-01-27 14:37:40.93160644 +0000 UTC m=+1562.582930892" lastFinishedPulling="2026-01-27 14:37:43.469864418 +0000 UTC m=+1565.121188840" observedRunningTime="2026-01-27 14:37:44.005882477 +0000 UTC m=+1565.657206929" watchObservedRunningTime="2026-01-27 14:37:44.018172909 +0000 UTC m=+1565.669497351"
Jan 27 14:37:49 crc kubenswrapper[4833]: I0127 14:37:49.940435 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:49 crc kubenswrapper[4833]: I0127 14:37:49.942229 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:50 crc kubenswrapper[4833]: I0127 14:37:50.022897 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:50 crc kubenswrapper[4833]: I0127 14:37:50.146655 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xkzls"
Jan 27 14:37:50 crc kubenswrapper[4833]: I0127 14:37:50.279341 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkzls"]
Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.088426 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xkzls" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="registry-server"
containerID="cri-o://3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21" gracePeriod=2 Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.423620 4833 scope.go:117] "RemoveContainer" containerID="64567016eb39649dd5fafa726e572fc1e7050b9a129371be6987bf8897eb27b6" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.520816 4833 scope.go:117] "RemoveContainer" containerID="f2c2ac8c4b8f5f8a2738bf99f826e32f49d5267d2a4f1954669c0a2a08017808" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.554652 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkzls" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.756499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities\") pod \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.756625 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content\") pod \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.756808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpr2l\" (UniqueName: \"kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l\") pod \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\" (UID: \"8c31bbe3-2c71-4920-8b00-b9e0820856f0\") " Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.759508 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities" (OuterVolumeSpecName: "utilities") pod 
"8c31bbe3-2c71-4920-8b00-b9e0820856f0" (UID: "8c31bbe3-2c71-4920-8b00-b9e0820856f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.770761 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l" (OuterVolumeSpecName: "kube-api-access-qpr2l") pod "8c31bbe3-2c71-4920-8b00-b9e0820856f0" (UID: "8c31bbe3-2c71-4920-8b00-b9e0820856f0"). InnerVolumeSpecName "kube-api-access-qpr2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.830758 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c31bbe3-2c71-4920-8b00-b9e0820856f0" (UID: "8c31bbe3-2c71-4920-8b00-b9e0820856f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.860798 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.860905 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c31bbe3-2c71-4920-8b00-b9e0820856f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:52 crc kubenswrapper[4833]: I0127 14:37:52.860930 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpr2l\" (UniqueName: \"kubernetes.io/projected/8c31bbe3-2c71-4920-8b00-b9e0820856f0-kube-api-access-qpr2l\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.107242 4833 generic.go:334] "Generic (PLEG): container finished" podID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerID="3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21" exitCode=0 Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.107286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerDied","Data":"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21"} Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.107322 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkzls" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.107336 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkzls" event={"ID":"8c31bbe3-2c71-4920-8b00-b9e0820856f0","Type":"ContainerDied","Data":"4b5514131fc19e445f17ee5f23db155fa38573c41bf17271821aeeab4193058e"} Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.107360 4833 scope.go:117] "RemoveContainer" containerID="3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.135343 4833 scope.go:117] "RemoveContainer" containerID="df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.159047 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkzls"] Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.167943 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xkzls"] Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.169923 4833 scope.go:117] "RemoveContainer" containerID="ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.196784 4833 scope.go:117] "RemoveContainer" containerID="3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21" Jan 27 14:37:53 crc kubenswrapper[4833]: E0127 14:37:53.197890 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21\": container with ID starting with 3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21 not found: ID does not exist" containerID="3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.197940 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21"} err="failed to get container status \"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21\": rpc error: code = NotFound desc = could not find container \"3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21\": container with ID starting with 3ab381337c77d484d7b44a426ca79cd7a436f6c52e5dd914b348bec620152f21 not found: ID does not exist" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.198086 4833 scope.go:117] "RemoveContainer" containerID="df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99" Jan 27 14:37:53 crc kubenswrapper[4833]: E0127 14:37:53.198807 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99\": container with ID starting with df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99 not found: ID does not exist" containerID="df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.198844 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99"} err="failed to get container status \"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99\": rpc error: code = NotFound desc = could not find container \"df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99\": container with ID starting with df89cfd0c08fe3fe0c81a6077500d2b9d56447486873e80944bc83f36aaaba99 not found: ID does not exist" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.198866 4833 scope.go:117] "RemoveContainer" containerID="ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2" Jan 27 14:37:53 crc kubenswrapper[4833]: E0127 
14:37:53.199244 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2\": container with ID starting with ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2 not found: ID does not exist" containerID="ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.199264 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2"} err="failed to get container status \"ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2\": rpc error: code = NotFound desc = could not find container \"ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2\": container with ID starting with ac4123eab32ff6b5956e3dd9c80a283e7be8eda8c60232a4c1ae9bb3ed99abf2 not found: ID does not exist" Jan 27 14:37:53 crc kubenswrapper[4833]: I0127 14:37:53.223254 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" path="/var/lib/kubelet/pods/8c31bbe3-2c71-4920-8b00-b9e0820856f0/volumes" Jan 27 14:38:02 crc kubenswrapper[4833]: I0127 14:38:02.261562 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:38:02 crc kubenswrapper[4833]: I0127 14:38:02.262136 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 14:38:02 crc kubenswrapper[4833]: I0127 14:38:02.262205 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:38:02 crc kubenswrapper[4833]: I0127 14:38:02.263260 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:38:02 crc kubenswrapper[4833]: I0127 14:38:02.263345 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" gracePeriod=600 Jan 27 14:38:02 crc kubenswrapper[4833]: E0127 14:38:02.391055 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:38:03 crc kubenswrapper[4833]: I0127 14:38:03.247600 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" exitCode=0 Jan 27 14:38:03 crc kubenswrapper[4833]: I0127 14:38:03.247651 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce"} Jan 27 14:38:03 crc kubenswrapper[4833]: I0127 14:38:03.247689 4833 scope.go:117] "RemoveContainer" containerID="c0206b6d2836f14765d1c04bb41bfbd60d766e4b0e2c4de5107dd75cf3400e10" Jan 27 14:38:03 crc kubenswrapper[4833]: I0127 14:38:03.248426 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:38:03 crc kubenswrapper[4833]: E0127 14:38:03.248885 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:38:16 crc kubenswrapper[4833]: I0127 14:38:16.211564 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:38:16 crc kubenswrapper[4833]: E0127 14:38:16.212568 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:38:30 crc kubenswrapper[4833]: I0127 14:38:30.211509 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:38:30 crc kubenswrapper[4833]: E0127 14:38:30.212689 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:38:42 crc kubenswrapper[4833]: I0127 14:38:42.211314 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:38:42 crc kubenswrapper[4833]: E0127 14:38:42.212204 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:38:52 crc kubenswrapper[4833]: I0127 14:38:52.656350 4833 scope.go:117] "RemoveContainer" containerID="71c1c059727ffa2ea65f10c1821c6e6ffd3ed28cea15698be92ba762e5cefec0" Jan 27 14:38:52 crc kubenswrapper[4833]: I0127 14:38:52.857821 4833 scope.go:117] "RemoveContainer" containerID="bb2885ee72f3d9527f85bbf5fbf469bcf45058d8227ea6e033e8e2bf2956395d" Jan 27 14:38:53 crc kubenswrapper[4833]: I0127 14:38:53.227154 4833 scope.go:117] "RemoveContainer" containerID="49b577c1fde8b5cbe8ecc68b5275570b3e97fba82792f204d3c4e448bf975eb4" Jan 27 14:38:53 crc kubenswrapper[4833]: I0127 14:38:53.262944 4833 scope.go:117] "RemoveContainer" containerID="5649838668c320f607064d2a42be88f546636ab9f5c01ac6f06dd56dacba639e" Jan 27 14:38:53 crc kubenswrapper[4833]: I0127 14:38:53.301524 4833 scope.go:117] "RemoveContainer" containerID="1448574a4fa6bf4f370a7b1032c0cc545ac2d9f1f7f3845ce6f0c685b43efdd7" Jan 27 14:38:54 crc kubenswrapper[4833]: I0127 14:38:54.211552 4833 
scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:38:54 crc kubenswrapper[4833]: E0127 14:38:54.211972 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:39:05 crc kubenswrapper[4833]: I0127 14:39:05.211233 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:39:05 crc kubenswrapper[4833]: E0127 14:39:05.212322 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:39:20 crc kubenswrapper[4833]: I0127 14:39:20.210315 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:39:20 crc kubenswrapper[4833]: E0127 14:39:20.211011 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:39:33 crc kubenswrapper[4833]: I0127 
14:39:33.210485 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:39:33 crc kubenswrapper[4833]: E0127 14:39:33.211382 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:39:46 crc kubenswrapper[4833]: I0127 14:39:46.210975 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:39:46 crc kubenswrapper[4833]: E0127 14:39:46.211994 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:39:58 crc kubenswrapper[4833]: I0127 14:39:58.210576 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:39:58 crc kubenswrapper[4833]: E0127 14:39:58.211427 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:40:10 crc 
kubenswrapper[4833]: I0127 14:40:10.211176 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:40:10 crc kubenswrapper[4833]: E0127 14:40:10.211962 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:40:25 crc kubenswrapper[4833]: I0127 14:40:25.210416 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:40:25 crc kubenswrapper[4833]: E0127 14:40:25.211227 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:40:40 crc kubenswrapper[4833]: I0127 14:40:40.210930 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:40:40 crc kubenswrapper[4833]: E0127 14:40:40.212782 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 
27 14:40:55 crc kubenswrapper[4833]: I0127 14:40:55.211702 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:40:55 crc kubenswrapper[4833]: E0127 14:40:55.213335 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.050806 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-t27br"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.062212 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-z9mm9"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.074310 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-402d-account-create-update-8pv4k"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.083682 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d9e5-account-create-update-s49bs"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.093028 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-t27br"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.102644 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-z9mm9"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.112295 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-402d-account-create-update-8pv4k"] Jan 27 14:41:04 crc kubenswrapper[4833]: I0127 14:41:04.120343 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-d9e5-account-create-update-s49bs"] Jan 27 14:41:05 crc kubenswrapper[4833]: I0127 14:41:05.236175 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d869cb-334d-4d2a-917c-25fe86c0610b" path="/var/lib/kubelet/pods/03d869cb-334d-4d2a-917c-25fe86c0610b/volumes" Jan 27 14:41:05 crc kubenswrapper[4833]: I0127 14:41:05.253470 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11048d75-f33a-45c7-867a-a1ba4eb48e52" path="/var/lib/kubelet/pods/11048d75-f33a-45c7-867a-a1ba4eb48e52/volumes" Jan 27 14:41:05 crc kubenswrapper[4833]: I0127 14:41:05.254291 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19bc9850-2467-4ba0-a2bd-d901ad222ea1" path="/var/lib/kubelet/pods/19bc9850-2467-4ba0-a2bd-d901ad222ea1/volumes" Jan 27 14:41:05 crc kubenswrapper[4833]: I0127 14:41:05.268893 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb" path="/var/lib/kubelet/pods/dd00b36b-7dd8-44b6-82f2-6b3dd9dbd0fb/volumes" Jan 27 14:41:08 crc kubenswrapper[4833]: I0127 14:41:08.275764 4833 generic.go:334] "Generic (PLEG): container finished" podID="2c857060-4fc1-48fd-86e9-e17957d53607" containerID="7ee36984d985936fd38a37878218dc244c60e4de85b1bda272bb3fa48b66e433" exitCode=0 Jan 27 14:41:08 crc kubenswrapper[4833]: I0127 14:41:08.275882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" event={"ID":"2c857060-4fc1-48fd-86e9-e17957d53607","Type":"ContainerDied","Data":"7ee36984d985936fd38a37878218dc244c60e4de85b1bda272bb3fa48b66e433"} Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.692251 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.869914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory\") pod \"2c857060-4fc1-48fd-86e9-e17957d53607\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.870217 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam\") pod \"2c857060-4fc1-48fd-86e9-e17957d53607\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.870256 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle\") pod \"2c857060-4fc1-48fd-86e9-e17957d53607\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.870295 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrl72\" (UniqueName: \"kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72\") pod \"2c857060-4fc1-48fd-86e9-e17957d53607\" (UID: \"2c857060-4fc1-48fd-86e9-e17957d53607\") " Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.876474 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72" (OuterVolumeSpecName: "kube-api-access-qrl72") pod "2c857060-4fc1-48fd-86e9-e17957d53607" (UID: "2c857060-4fc1-48fd-86e9-e17957d53607"). InnerVolumeSpecName "kube-api-access-qrl72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.877330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2c857060-4fc1-48fd-86e9-e17957d53607" (UID: "2c857060-4fc1-48fd-86e9-e17957d53607"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.900102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2c857060-4fc1-48fd-86e9-e17957d53607" (UID: "2c857060-4fc1-48fd-86e9-e17957d53607"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.900617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory" (OuterVolumeSpecName: "inventory") pod "2c857060-4fc1-48fd-86e9-e17957d53607" (UID: "2c857060-4fc1-48fd-86e9-e17957d53607"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.973115 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.973436 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.973470 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrl72\" (UniqueName: \"kubernetes.io/projected/2c857060-4fc1-48fd-86e9-e17957d53607-kube-api-access-qrl72\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:09 crc kubenswrapper[4833]: I0127 14:41:09.973483 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c857060-4fc1-48fd-86e9-e17957d53607-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.211412 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:41:10 crc kubenswrapper[4833]: E0127 14:41:10.211910 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.298611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" event={"ID":"2c857060-4fc1-48fd-86e9-e17957d53607","Type":"ContainerDied","Data":"c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a"} Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.298656 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4734dbd6a32add923ca588a635fd5189d77fcf581d6ff5f262c92b422d5290a" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.298674 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414079 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz"] Jan 27 14:41:10 crc kubenswrapper[4833]: E0127 14:41:10.414554 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="extract-content" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414574 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="extract-content" Jan 27 14:41:10 crc kubenswrapper[4833]: E0127 14:41:10.414596 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c857060-4fc1-48fd-86e9-e17957d53607" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414603 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c857060-4fc1-48fd-86e9-e17957d53607" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:41:10 crc kubenswrapper[4833]: E0127 14:41:10.414615 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="registry-server" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414621 4833 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="registry-server" Jan 27 14:41:10 crc kubenswrapper[4833]: E0127 14:41:10.414645 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="extract-utilities" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414651 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="extract-utilities" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414838 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c857060-4fc1-48fd-86e9-e17957d53607" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.414860 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c31bbe3-2c71-4920-8b00-b9e0820856f0" containerName="registry-server" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.415583 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.419237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.419472 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.419708 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.421958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.426305 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz"] Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.588489 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.588558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 
14:41:10.588675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgmw\" (UniqueName: \"kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.690786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.690996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwgmw\" (UniqueName: \"kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.691143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.694789 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.696010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.716513 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwgmw\" (UniqueName: \"kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:10 crc kubenswrapper[4833]: I0127 14:41:10.748599 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:41:11 crc kubenswrapper[4833]: I0127 14:41:11.380792 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz"] Jan 27 14:41:11 crc kubenswrapper[4833]: I0127 14:41:11.392266 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.044189 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-224ld"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.053977 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7dd7-account-create-update-b599r"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.069629 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kmtpk"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.081641 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-224ld"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.098236 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f0ec-account-create-update-ptncm"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.112563 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7dd7-account-create-update-b599r"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.123632 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f0ec-account-create-update-ptncm"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.138889 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kmtpk"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.323663 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" 
event={"ID":"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68","Type":"ContainerStarted","Data":"43b1f0825a3d826b1b47e6a6eaa31218de5169307b5834ad1677e853ddd892bd"} Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.530935 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.535453 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.556152 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.731320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.731672 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6ph\" (UniqueName: \"kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.731834 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.833676 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.833817 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.833847 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6ph\" (UniqueName: \"kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.834429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.834558 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.855795 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rv6ph\" (UniqueName: \"kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph\") pod \"redhat-marketplace-j7n4v\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:12 crc kubenswrapper[4833]: I0127 14:41:12.859466 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.223333 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16db8127-7542-4d49-bd70-1b2a994d842e" path="/var/lib/kubelet/pods/16db8127-7542-4d49-bd70-1b2a994d842e/volumes" Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.225500 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c77aee3-afff-40bc-b8c8-963be9bc87ab" path="/var/lib/kubelet/pods/4c77aee3-afff-40bc-b8c8-963be9bc87ab/volumes" Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.226185 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b22d86-0d40-49e9-ac4d-7cc87d99c800" path="/var/lib/kubelet/pods/84b22d86-0d40-49e9-ac4d-7cc87d99c800/volumes" Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.226862 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d43701-074b-492e-82bc-7745956bb701" path="/var/lib/kubelet/pods/e4d43701-074b-492e-82bc-7745956bb701/volumes" Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.333179 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" event={"ID":"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68","Type":"ContainerStarted","Data":"42ee57f514419bb27a02e50907e2e9803da7bc540e9584ab7f69672a17153344"} Jan 27 14:41:13 crc kubenswrapper[4833]: W0127 14:41:13.360850 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6437f63d_06ba_4163_a029_b2194c084535.slice/crio-a50207e8bfbbd6741455018d78614eae0fbca1b5ebfb794c36ae5bc839877c0d WatchSource:0}: Error finding container a50207e8bfbbd6741455018d78614eae0fbca1b5ebfb794c36ae5bc839877c0d: Status 404 returned error can't find the container with id a50207e8bfbbd6741455018d78614eae0fbca1b5ebfb794c36ae5bc839877c0d Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.365429 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:13 crc kubenswrapper[4833]: I0127 14:41:13.366790 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" podStartSLOduration=2.661566315 podStartE2EDuration="3.366776222s" podCreationTimestamp="2026-01-27 14:41:10 +0000 UTC" firstStartedPulling="2026-01-27 14:41:11.392023393 +0000 UTC m=+1773.043347795" lastFinishedPulling="2026-01-27 14:41:12.0972333 +0000 UTC m=+1773.748557702" observedRunningTime="2026-01-27 14:41:13.351583663 +0000 UTC m=+1775.002908065" watchObservedRunningTime="2026-01-27 14:41:13.366776222 +0000 UTC m=+1775.018100644" Jan 27 14:41:14 crc kubenswrapper[4833]: I0127 14:41:14.348969 4833 generic.go:334] "Generic (PLEG): container finished" podID="6437f63d-06ba-4163-a029-b2194c084535" containerID="65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79" exitCode=0 Jan 27 14:41:14 crc kubenswrapper[4833]: I0127 14:41:14.349047 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerDied","Data":"65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79"} Jan 27 14:41:14 crc kubenswrapper[4833]: I0127 14:41:14.349345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" 
event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerStarted","Data":"a50207e8bfbbd6741455018d78614eae0fbca1b5ebfb794c36ae5bc839877c0d"} Jan 27 14:41:16 crc kubenswrapper[4833]: I0127 14:41:16.369647 4833 generic.go:334] "Generic (PLEG): container finished" podID="6437f63d-06ba-4163-a029-b2194c084535" containerID="fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5" exitCode=0 Jan 27 14:41:16 crc kubenswrapper[4833]: I0127 14:41:16.369724 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerDied","Data":"fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5"} Jan 27 14:41:17 crc kubenswrapper[4833]: I0127 14:41:17.382773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerStarted","Data":"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055"} Jan 27 14:41:17 crc kubenswrapper[4833]: I0127 14:41:17.408665 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7n4v" podStartSLOduration=2.810197779 podStartE2EDuration="5.408639362s" podCreationTimestamp="2026-01-27 14:41:12 +0000 UTC" firstStartedPulling="2026-01-27 14:41:14.351725015 +0000 UTC m=+1776.003049417" lastFinishedPulling="2026-01-27 14:41:16.950166598 +0000 UTC m=+1778.601491000" observedRunningTime="2026-01-27 14:41:17.405130524 +0000 UTC m=+1779.056454946" watchObservedRunningTime="2026-01-27 14:41:17.408639362 +0000 UTC m=+1779.059963764" Jan 27 14:41:22 crc kubenswrapper[4833]: I0127 14:41:22.860001 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:22 crc kubenswrapper[4833]: I0127 14:41:22.860566 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:22 crc kubenswrapper[4833]: I0127 14:41:22.907021 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:23 crc kubenswrapper[4833]: I0127 14:41:23.211422 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:41:23 crc kubenswrapper[4833]: E0127 14:41:23.211821 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:41:23 crc kubenswrapper[4833]: I0127 14:41:23.487892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:26 crc kubenswrapper[4833]: I0127 14:41:26.521464 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:26 crc kubenswrapper[4833]: I0127 14:41:26.521739 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7n4v" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="registry-server" containerID="cri-o://15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055" gracePeriod=2 Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.051897 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qmw5s"] Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.066410 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qmw5s"] Jan 27 
14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.103710 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.221033 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ae6c77-5b30-4091-b78e-6ce66768de51" path="/var/lib/kubelet/pods/61ae6c77-5b30-4091-b78e-6ce66768de51/volumes" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.264337 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities\") pod \"6437f63d-06ba-4163-a029-b2194c084535\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.264427 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content\") pod \"6437f63d-06ba-4163-a029-b2194c084535\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.264591 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv6ph\" (UniqueName: \"kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph\") pod \"6437f63d-06ba-4163-a029-b2194c084535\" (UID: \"6437f63d-06ba-4163-a029-b2194c084535\") " Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.266400 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities" (OuterVolumeSpecName: "utilities") pod "6437f63d-06ba-4163-a029-b2194c084535" (UID: "6437f63d-06ba-4163-a029-b2194c084535"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.276106 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph" (OuterVolumeSpecName: "kube-api-access-rv6ph") pod "6437f63d-06ba-4163-a029-b2194c084535" (UID: "6437f63d-06ba-4163-a029-b2194c084535"). InnerVolumeSpecName "kube-api-access-rv6ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.303676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6437f63d-06ba-4163-a029-b2194c084535" (UID: "6437f63d-06ba-4163-a029-b2194c084535"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.367504 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv6ph\" (UniqueName: \"kubernetes.io/projected/6437f63d-06ba-4163-a029-b2194c084535-kube-api-access-rv6ph\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.367537 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.367552 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6437f63d-06ba-4163-a029-b2194c084535-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.480403 4833 generic.go:334] "Generic (PLEG): container finished" podID="6437f63d-06ba-4163-a029-b2194c084535" 
containerID="15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055" exitCode=0 Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.480611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerDied","Data":"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055"} Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.481058 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7n4v" event={"ID":"6437f63d-06ba-4163-a029-b2194c084535","Type":"ContainerDied","Data":"a50207e8bfbbd6741455018d78614eae0fbca1b5ebfb794c36ae5bc839877c0d"} Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.481093 4833 scope.go:117] "RemoveContainer" containerID="15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.480717 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7n4v" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.514151 4833 scope.go:117] "RemoveContainer" containerID="fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.521376 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.531985 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7n4v"] Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.547253 4833 scope.go:117] "RemoveContainer" containerID="65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.581813 4833 scope.go:117] "RemoveContainer" containerID="15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055" Jan 27 14:41:27 crc kubenswrapper[4833]: E0127 14:41:27.582232 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055\": container with ID starting with 15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055 not found: ID does not exist" containerID="15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.582279 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055"} err="failed to get container status \"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055\": rpc error: code = NotFound desc = could not find container \"15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055\": container with ID starting with 15bcd43f09b8fed07931fa1b078ead291843e52e189603bb4ef8d6c2c333a055 not found: 
ID does not exist" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.582310 4833 scope.go:117] "RemoveContainer" containerID="fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5" Jan 27 14:41:27 crc kubenswrapper[4833]: E0127 14:41:27.582684 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5\": container with ID starting with fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5 not found: ID does not exist" containerID="fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.582713 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5"} err="failed to get container status \"fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5\": rpc error: code = NotFound desc = could not find container \"fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5\": container with ID starting with fb4d3466d73304d9334c3327cb8dc375dc8bf4f2784b6f3d38622b405009a7e5 not found: ID does not exist" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.582733 4833 scope.go:117] "RemoveContainer" containerID="65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79" Jan 27 14:41:27 crc kubenswrapper[4833]: E0127 14:41:27.582963 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79\": container with ID starting with 65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79 not found: ID does not exist" containerID="65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79" Jan 27 14:41:27 crc kubenswrapper[4833]: I0127 14:41:27.582996 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79"} err="failed to get container status \"65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79\": rpc error: code = NotFound desc = could not find container \"65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79\": container with ID starting with 65851fdc0f68d9db3f1e3c4bb1ffb702190d8226707142065f25f216df031c79 not found: ID does not exist" Jan 27 14:41:29 crc kubenswrapper[4833]: I0127 14:41:29.047216 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fttzp"] Jan 27 14:41:29 crc kubenswrapper[4833]: I0127 14:41:29.056983 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fttzp"] Jan 27 14:41:29 crc kubenswrapper[4833]: I0127 14:41:29.232022 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d74d9d-a60f-4a0b-a3a8-55e91415f8ff" path="/var/lib/kubelet/pods/02d74d9d-a60f-4a0b-a3a8-55e91415f8ff/volumes" Jan 27 14:41:29 crc kubenswrapper[4833]: I0127 14:41:29.233294 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6437f63d-06ba-4163-a029-b2194c084535" path="/var/lib/kubelet/pods/6437f63d-06ba-4163-a029-b2194c084535/volumes" Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.040616 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1cea-account-create-update-mdk7b"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.051034 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xnnkw"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.066533 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-10b4-account-create-update-hjwjt"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.078048 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xnnkw"] Jan 27 14:41:30 
crc kubenswrapper[4833]: I0127 14:41:30.089698 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5915-account-create-update-wd9r6"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.100503 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1cea-account-create-update-mdk7b"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.113585 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-10b4-account-create-update-hjwjt"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.121842 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d4xbb"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.130199 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d4xbb"] Jan 27 14:41:30 crc kubenswrapper[4833]: I0127 14:41:30.139104 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5915-account-create-update-wd9r6"] Jan 27 14:41:31 crc kubenswrapper[4833]: I0127 14:41:31.225259 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200d49d1-840e-40d8-b347-f0603e5c5e40" path="/var/lib/kubelet/pods/200d49d1-840e-40d8-b347-f0603e5c5e40/volumes" Jan 27 14:41:31 crc kubenswrapper[4833]: I0127 14:41:31.226397 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4cf139-86d7-47e5-aba6-ae965bc89ed8" path="/var/lib/kubelet/pods/2c4cf139-86d7-47e5-aba6-ae965bc89ed8/volumes" Jan 27 14:41:31 crc kubenswrapper[4833]: I0127 14:41:31.227005 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af7b247-ce6a-494a-97d5-1d21afcf7727" path="/var/lib/kubelet/pods/3af7b247-ce6a-494a-97d5-1d21afcf7727/volumes" Jan 27 14:41:31 crc kubenswrapper[4833]: I0127 14:41:31.227547 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7" 
path="/var/lib/kubelet/pods/4a6d9a3a-b0b6-482b-bdc4-cb38b5d75bf7/volumes" Jan 27 14:41:31 crc kubenswrapper[4833]: I0127 14:41:31.228704 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca7faff-15ba-4ec9-b034-8149ff5d4fd4" path="/var/lib/kubelet/pods/dca7faff-15ba-4ec9-b034-8149ff5d4fd4/volumes" Jan 27 14:41:36 crc kubenswrapper[4833]: I0127 14:41:36.211067 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:41:36 crc kubenswrapper[4833]: E0127 14:41:36.211976 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:41:47 crc kubenswrapper[4833]: I0127 14:41:47.211659 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:41:47 crc kubenswrapper[4833]: E0127 14:41:47.213089 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:41:51 crc kubenswrapper[4833]: I0127 14:41:51.034802 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-26wng"] Jan 27 14:41:51 crc kubenswrapper[4833]: I0127 14:41:51.045193 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-26wng"] Jan 27 14:41:51 crc 
kubenswrapper[4833]: I0127 14:41:51.226675 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d497f9-964a-4818-9f39-09cf9a0f83fb" path="/var/lib/kubelet/pods/69d497f9-964a-4818-9f39-09cf9a0f83fb/volumes" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.516050 4833 scope.go:117] "RemoveContainer" containerID="7cbf4a9ba25601a662054bebcb7956e9ac06e51f9392f8953e54d8d8c7f9391d" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.549941 4833 scope.go:117] "RemoveContainer" containerID="8bfd63e926ead44687b1817e5823aa6ea5463dc9d9351b772cb16edcd7b17bfd" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.650793 4833 scope.go:117] "RemoveContainer" containerID="c6cc17b5a899cb7e366350072839fd07d637bbff432f112a05dbd95618fdd6df" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.686511 4833 scope.go:117] "RemoveContainer" containerID="d069df7882429b82ff956a7fe3e3fc9d6730c736d8d69f57c3a40b3c584b6323" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.721175 4833 scope.go:117] "RemoveContainer" containerID="d7a5df6ff12967ff88e012505674edeb5aae81e2dd290644b249b89dd6e6827c" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.743733 4833 scope.go:117] "RemoveContainer" containerID="551f9bd745798fe95f351bf4bfcf28007f82fec9bfe4c3405c7ea0592908e1d0" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.803547 4833 scope.go:117] "RemoveContainer" containerID="7560542a84023f545c1655f12135aed8852b69e3fff7781a0f97a663934a85c0" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.861518 4833 scope.go:117] "RemoveContainer" containerID="d43ced6ff834553886ed37456bb08f98e09f184342a44034177188e0c002abae" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.896843 4833 scope.go:117] "RemoveContainer" containerID="e66e256ccdd51a9fce43ee37229423be740387897b4d5fdfcce5c34e0fffe1d4" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.933733 4833 scope.go:117] "RemoveContainer" 
containerID="249645aaf172f80e19446a27b2161e968bb22165b8a94835fb8438b8a8ee2644" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.961289 4833 scope.go:117] "RemoveContainer" containerID="36ae9dc88c2276ca5b16d737f2a22af2c41f5181acbd5ea170dc751bef76ff98" Jan 27 14:41:53 crc kubenswrapper[4833]: I0127 14:41:53.989431 4833 scope.go:117] "RemoveContainer" containerID="e06a27e5305b638e65449979d92fcfe5c1917c3eeb49841d00c6553c1e152fa3" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.016635 4833 scope.go:117] "RemoveContainer" containerID="aa30d4e02508c4d2fd8ca082f9368ae6afac346d7b4cae961ef2623329e42894" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.043242 4833 scope.go:117] "RemoveContainer" containerID="8b051c4c1520a89e27e2ef667fbe83134f48bb3774dd8081fa80a9a3986c211f" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.068762 4833 scope.go:117] "RemoveContainer" containerID="1eb706d0dea0795b43a1bcd763982f478e393522eefbf40f969c257db070f138" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.089426 4833 scope.go:117] "RemoveContainer" containerID="cd9944f75ddb37baf47cf4bc0e105e06506e50c13e35be010f15df06fcc6fabf" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.110339 4833 scope.go:117] "RemoveContainer" containerID="95310040e5d8f55d783ab342132e7b92bc5a9500abfe3f99bd11242874fee475" Jan 27 14:41:54 crc kubenswrapper[4833]: I0127 14:41:54.134929 4833 scope.go:117] "RemoveContainer" containerID="973e1ab5f561584a4835e0fb84bcb4f7e86dc32694b42b07ca4067a87b4df2f4" Jan 27 14:41:56 crc kubenswrapper[4833]: I0127 14:41:56.048635 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-q6v7w"] Jan 27 14:41:56 crc kubenswrapper[4833]: I0127 14:41:56.064404 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-q6v7w"] Jan 27 14:41:57 crc kubenswrapper[4833]: I0127 14:41:57.232793 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb117e6-e4b5-4577-9af1-c3b385c4f23d" path="/var/lib/kubelet/pods/3cb117e6-e4b5-4577-9af1-c3b385c4f23d/volumes" Jan 27 14:42:00 crc kubenswrapper[4833]: I0127 14:42:00.211119 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:42:00 crc kubenswrapper[4833]: E0127 14:42:00.211685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:42:13 crc kubenswrapper[4833]: I0127 14:42:13.210883 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:42:13 crc kubenswrapper[4833]: E0127 14:42:13.211958 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:42:27 crc kubenswrapper[4833]: I0127 14:42:27.211995 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:42:27 crc kubenswrapper[4833]: E0127 14:42:27.213186 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:42:33 crc kubenswrapper[4833]: I0127 14:42:33.054394 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mtnxp"] Jan 27 14:42:33 crc kubenswrapper[4833]: I0127 14:42:33.073660 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mtnxp"] Jan 27 14:42:33 crc kubenswrapper[4833]: I0127 14:42:33.230723 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b" path="/var/lib/kubelet/pods/5c0d1a14-f535-4a9d-92c9-794d3e3c7a4b/volumes" Jan 27 14:42:40 crc kubenswrapper[4833]: I0127 14:42:40.210693 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:42:40 crc kubenswrapper[4833]: E0127 14:42:40.211389 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.058376 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-wchjj"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.070267 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-86px7"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.085481 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-wchjj"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.093527 4833 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-86px7"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.102207 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6dqbw"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.109547 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6dqbw"] Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.225594 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df071dc-eb5b-40dd-85ea-430f44ab198f" path="/var/lib/kubelet/pods/3df071dc-eb5b-40dd-85ea-430f44ab198f/volumes" Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.227409 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a57c811-cef6-458c-bb52-ef9e0861e39a" path="/var/lib/kubelet/pods/5a57c811-cef6-458c-bb52-ef9e0861e39a/volumes" Jan 27 14:42:45 crc kubenswrapper[4833]: I0127 14:42:45.228038 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a324f832-7082-443a-87c7-3cef46ebe7ea" path="/var/lib/kubelet/pods/a324f832-7082-443a-87c7-3cef46ebe7ea/volumes" Jan 27 14:42:50 crc kubenswrapper[4833]: I0127 14:42:50.049299 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cjff8"] Jan 27 14:42:50 crc kubenswrapper[4833]: I0127 14:42:50.068219 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cjff8"] Jan 27 14:42:51 crc kubenswrapper[4833]: I0127 14:42:51.224693 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32678252-925f-4f5c-9602-5409032b6063" path="/var/lib/kubelet/pods/32678252-925f-4f5c-9602-5409032b6063/volumes" Jan 27 14:42:52 crc kubenswrapper[4833]: I0127 14:42:52.210474 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:42:52 crc kubenswrapper[4833]: E0127 14:42:52.211115 4833 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.496380 4833 scope.go:117] "RemoveContainer" containerID="51fd440fc6d895911812687450be70452d3f42f42f6ab5fcf444d1029a81fbe7" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.551546 4833 scope.go:117] "RemoveContainer" containerID="a1ed3db3a6ce62035fd7c48adee237998df1d76a854be80c61fb77305399c6bd" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.627110 4833 scope.go:117] "RemoveContainer" containerID="ee90a0f8f7ea56c5be43e04a44222b8834b06f54acba33a13be9e121129d6afe" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.666319 4833 scope.go:117] "RemoveContainer" containerID="71017171e2d70f8a3d9129c76f0441d479d83bb8220f8380b3b6cf9f10ec7a33" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.721144 4833 scope.go:117] "RemoveContainer" containerID="cf0973c23f500358c49a8dadf69c6da79910b7909dc76dc24c87aa2b1df39b81" Jan 27 14:42:54 crc kubenswrapper[4833]: I0127 14:42:54.763903 4833 scope.go:117] "RemoveContainer" containerID="1dcc72e263004f90472e17961cc5e35d3a9772dcdd013709387dcca812a62fd5" Jan 27 14:42:56 crc kubenswrapper[4833]: I0127 14:42:56.032416 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ht8p9"] Jan 27 14:42:56 crc kubenswrapper[4833]: I0127 14:42:56.044176 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ht8p9"] Jan 27 14:42:57 crc kubenswrapper[4833]: I0127 14:42:57.221518 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93af5fb-6812-4f30-9e89-e8c58b01a69e" 
path="/var/lib/kubelet/pods/a93af5fb-6812-4f30-9e89-e8c58b01a69e/volumes" Jan 27 14:42:57 crc kubenswrapper[4833]: I0127 14:42:57.537729 4833 generic.go:334] "Generic (PLEG): container finished" podID="cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" containerID="42ee57f514419bb27a02e50907e2e9803da7bc540e9584ab7f69672a17153344" exitCode=0 Jan 27 14:42:57 crc kubenswrapper[4833]: I0127 14:42:57.537772 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" event={"ID":"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68","Type":"ContainerDied","Data":"42ee57f514419bb27a02e50907e2e9803da7bc540e9584ab7f69672a17153344"} Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.033616 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.062301 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwgmw\" (UniqueName: \"kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw\") pod \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.062407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam\") pod \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.062587 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory\") pod \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\" (UID: \"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68\") " Jan 27 14:42:59 crc 
kubenswrapper[4833]: I0127 14:42:59.068925 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw" (OuterVolumeSpecName: "kube-api-access-gwgmw") pod "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" (UID: "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68"). InnerVolumeSpecName "kube-api-access-gwgmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.107475 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" (UID: "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.110163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory" (OuterVolumeSpecName: "inventory") pod "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" (UID: "cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.164314 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwgmw\" (UniqueName: \"kubernetes.io/projected/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-kube-api-access-gwgmw\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.164341 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.164351 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.577681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" event={"ID":"cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68","Type":"ContainerDied","Data":"43b1f0825a3d826b1b47e6a6eaa31218de5169307b5834ad1677e853ddd892bd"} Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.578034 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b1f0825a3d826b1b47e6a6eaa31218de5169307b5834ad1677e853ddd892bd" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.578138 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.646503 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p"] Jan 27 14:42:59 crc kubenswrapper[4833]: E0127 14:42:59.646896 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="extract-utilities" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.646915 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="extract-utilities" Jan 27 14:42:59 crc kubenswrapper[4833]: E0127 14:42:59.646927 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.646936 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:42:59 crc kubenswrapper[4833]: E0127 14:42:59.646953 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="registry-server" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.646958 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="registry-server" Jan 27 14:42:59 crc kubenswrapper[4833]: E0127 14:42:59.646975 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="extract-content" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.646981 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="extract-content" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 
14:42:59.647148 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.647171 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6437f63d-06ba-4163-a029-b2194c084535" containerName="registry-server" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.647905 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.649653 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.649736 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.649891 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.650241 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.690389 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p"] Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.791630 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.791683 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxblt\" (UniqueName: \"kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.792273 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.894217 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.894268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxblt\" (UniqueName: \"kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: 
I0127 14:42:59.894359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.900346 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.901753 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.922219 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxblt\" (UniqueName: \"kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:42:59 crc kubenswrapper[4833]: I0127 14:42:59.966375 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:43:00 crc kubenswrapper[4833]: I0127 14:43:00.612562 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p"] Jan 27 14:43:01 crc kubenswrapper[4833]: I0127 14:43:01.601695 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" event={"ID":"aab88e93-fea0-4209-9335-b3ce6714babc","Type":"ContainerStarted","Data":"5149bfa414d533b8963649d1d1e841e1b9b791f2fbf8bfd186af6e7ada4036bb"} Jan 27 14:43:02 crc kubenswrapper[4833]: I0127 14:43:02.611866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" event={"ID":"aab88e93-fea0-4209-9335-b3ce6714babc","Type":"ContainerStarted","Data":"955316a190d358b1731a9895d2fe6b788adec3f216b9139b4a490b1149b515b2"} Jan 27 14:43:02 crc kubenswrapper[4833]: I0127 14:43:02.640591 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" podStartSLOduration=2.878289604 podStartE2EDuration="3.640571434s" podCreationTimestamp="2026-01-27 14:42:59 +0000 UTC" firstStartedPulling="2026-01-27 14:43:00.621778873 +0000 UTC m=+1882.273103275" lastFinishedPulling="2026-01-27 14:43:01.384060693 +0000 UTC m=+1883.035385105" observedRunningTime="2026-01-27 14:43:02.634171189 +0000 UTC m=+1884.285495601" watchObservedRunningTime="2026-01-27 14:43:02.640571434 +0000 UTC m=+1884.291895836" Jan 27 14:43:06 crc kubenswrapper[4833]: I0127 14:43:06.210826 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:43:06 crc kubenswrapper[4833]: I0127 14:43:06.660917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0"} Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.045354 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-v8nhd"] Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.054072 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cgrlm"] Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.064583 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8f54-account-create-update-hsskq"] Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.072325 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-v8nhd"] Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.080913 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cgrlm"] Jan 27 14:43:42 crc kubenswrapper[4833]: I0127 14:43:42.093543 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8f54-account-create-update-hsskq"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.043514 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9458-account-create-update-2ndkv"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.059493 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1995-account-create-update-7x427"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.067916 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-czh77"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.075019 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1995-account-create-update-7x427"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.082887 4833 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9458-account-create-update-2ndkv"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.089600 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-czh77"] Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.231656 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ce025e-318d-4abf-accf-2a7a35d7ec0b" path="/var/lib/kubelet/pods/03ce025e-318d-4abf-accf-2a7a35d7ec0b/volumes" Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.232254 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355b8f4b-7773-4472-a1a9-84ee61cba511" path="/var/lib/kubelet/pods/355b8f4b-7773-4472-a1a9-84ee61cba511/volumes" Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.232822 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca61e50-dbbc-4e99-ad42-9f769a410a6d" path="/var/lib/kubelet/pods/7ca61e50-dbbc-4e99-ad42-9f769a410a6d/volumes" Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.233373 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b245df-2c38-4337-9a1b-41c31fc88e1c" path="/var/lib/kubelet/pods/89b245df-2c38-4337-9a1b-41c31fc88e1c/volumes" Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.234359 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db269ee5-31fd-4d2a-83db-abe3047254fd" path="/var/lib/kubelet/pods/db269ee5-31fd-4d2a-83db-abe3047254fd/volumes" Jan 27 14:43:43 crc kubenswrapper[4833]: I0127 14:43:43.234905 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53" path="/var/lib/kubelet/pods/fb30e6b9-8bcf-4e87-9642-9ed79d5a6e53/volumes" Jan 27 14:43:54 crc kubenswrapper[4833]: I0127 14:43:54.913714 4833 scope.go:117] "RemoveContainer" containerID="0544b1ee1eaafdb570f60f30f7df478a62aef6caa0229e215abd3394a3d19a72" Jan 27 14:43:54 crc kubenswrapper[4833]: I0127 
14:43:54.953862 4833 scope.go:117] "RemoveContainer" containerID="bb41b043dfc8781773cbe0cd7dff2cc915090964c29161bcdf1c51574ace54d8" Jan 27 14:43:54 crc kubenswrapper[4833]: I0127 14:43:54.995586 4833 scope.go:117] "RemoveContainer" containerID="bd35fdcfe1eb54d2cdfea6987fbe76aeb3539e2fa41f22c279b1bd1b1822ee97" Jan 27 14:43:55 crc kubenswrapper[4833]: I0127 14:43:55.038200 4833 scope.go:117] "RemoveContainer" containerID="282f66dc67ecfafc95ab4c5d9ac8ef292a19d369e4f7c3c12d0ba9978fba5924" Jan 27 14:43:55 crc kubenswrapper[4833]: I0127 14:43:55.076723 4833 scope.go:117] "RemoveContainer" containerID="52fe7c1211d96fe83b1cb5a61cdf3c7f30d0eca60aa629b378e35821a1184556" Jan 27 14:43:55 crc kubenswrapper[4833]: I0127 14:43:55.136491 4833 scope.go:117] "RemoveContainer" containerID="3e5abe434e9b737f9d7af793cd27e1b2c513456790c8f9d7758d13757432594f" Jan 27 14:43:55 crc kubenswrapper[4833]: I0127 14:43:55.166060 4833 scope.go:117] "RemoveContainer" containerID="67b4a282153dd24a6e7803e64eb4f9c35bc5b57a2b224d1b50094194025b0bb2" Jan 27 14:44:11 crc kubenswrapper[4833]: I0127 14:44:11.056627 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bnbkx"] Jan 27 14:44:11 crc kubenswrapper[4833]: I0127 14:44:11.067306 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bnbkx"] Jan 27 14:44:11 crc kubenswrapper[4833]: I0127 14:44:11.223020 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8eed79-9bd3-4b27-b185-10d0f449e158" path="/var/lib/kubelet/pods/7b8eed79-9bd3-4b27-b185-10d0f449e158/volumes" Jan 27 14:44:16 crc kubenswrapper[4833]: I0127 14:44:16.701236 4833 generic.go:334] "Generic (PLEG): container finished" podID="aab88e93-fea0-4209-9335-b3ce6714babc" containerID="955316a190d358b1731a9895d2fe6b788adec3f216b9139b4a490b1149b515b2" exitCode=0 Jan 27 14:44:16 crc kubenswrapper[4833]: I0127 14:44:16.701361 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" event={"ID":"aab88e93-fea0-4209-9335-b3ce6714babc","Type":"ContainerDied","Data":"955316a190d358b1731a9895d2fe6b788adec3f216b9139b4a490b1149b515b2"} Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.211247 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.292826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxblt\" (UniqueName: \"kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt\") pod \"aab88e93-fea0-4209-9335-b3ce6714babc\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.292939 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory\") pod \"aab88e93-fea0-4209-9335-b3ce6714babc\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.293013 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam\") pod \"aab88e93-fea0-4209-9335-b3ce6714babc\" (UID: \"aab88e93-fea0-4209-9335-b3ce6714babc\") " Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.301530 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt" (OuterVolumeSpecName: "kube-api-access-bxblt") pod "aab88e93-fea0-4209-9335-b3ce6714babc" (UID: "aab88e93-fea0-4209-9335-b3ce6714babc"). InnerVolumeSpecName "kube-api-access-bxblt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.322820 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory" (OuterVolumeSpecName: "inventory") pod "aab88e93-fea0-4209-9335-b3ce6714babc" (UID: "aab88e93-fea0-4209-9335-b3ce6714babc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.327702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aab88e93-fea0-4209-9335-b3ce6714babc" (UID: "aab88e93-fea0-4209-9335-b3ce6714babc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.396191 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.396235 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aab88e93-fea0-4209-9335-b3ce6714babc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.396251 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxblt\" (UniqueName: \"kubernetes.io/projected/aab88e93-fea0-4209-9335-b3ce6714babc-kube-api-access-bxblt\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.719604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" 
event={"ID":"aab88e93-fea0-4209-9335-b3ce6714babc","Type":"ContainerDied","Data":"5149bfa414d533b8963649d1d1e841e1b9b791f2fbf8bfd186af6e7ada4036bb"} Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.719648 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.719661 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5149bfa414d533b8963649d1d1e841e1b9b791f2fbf8bfd186af6e7ada4036bb" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.807578 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks"] Jan 27 14:44:18 crc kubenswrapper[4833]: E0127 14:44:18.807977 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab88e93-fea0-4209-9335-b3ce6714babc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.807993 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab88e93-fea0-4209-9335-b3ce6714babc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.808184 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab88e93-fea0-4209-9335-b3ce6714babc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.808858 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.811042 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.811245 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.812190 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.817350 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.821247 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks"] Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.905723 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l99s\" (UniqueName: \"kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 14:44:18.906023 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:18 crc kubenswrapper[4833]: I0127 
14:44:18.906086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.007829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l99s\" (UniqueName: \"kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.008238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.008278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.015036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.015893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.028457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l99s\" (UniqueName: \"kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-899ks\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.135979 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.571075 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks"] Jan 27 14:44:19 crc kubenswrapper[4833]: I0127 14:44:19.731985 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" event={"ID":"6b8c7eee-cf33-40b8-82b9-e88287b52d3a","Type":"ContainerStarted","Data":"4bad864c7392092616a05379e23f07314a188ae46182323378bcf7cce40631af"} Jan 27 14:44:20 crc kubenswrapper[4833]: I0127 14:44:20.744883 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" event={"ID":"6b8c7eee-cf33-40b8-82b9-e88287b52d3a","Type":"ContainerStarted","Data":"b2a37b3e3c9a3e85cc9b84e71639edcde611e349342d41d5eaff61dda383d309"} Jan 27 14:44:20 crc kubenswrapper[4833]: I0127 14:44:20.775567 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" podStartSLOduration=2.363263859 podStartE2EDuration="2.775548183s" podCreationTimestamp="2026-01-27 14:44:18 +0000 UTC" firstStartedPulling="2026-01-27 14:44:19.575900539 +0000 UTC m=+1961.227224941" lastFinishedPulling="2026-01-27 14:44:19.988184863 +0000 UTC m=+1961.639509265" observedRunningTime="2026-01-27 14:44:20.771761391 +0000 UTC m=+1962.423085793" watchObservedRunningTime="2026-01-27 14:44:20.775548183 +0000 UTC m=+1962.426872585" Jan 27 14:44:25 crc kubenswrapper[4833]: I0127 14:44:25.787903 4833 generic.go:334] "Generic (PLEG): container finished" podID="6b8c7eee-cf33-40b8-82b9-e88287b52d3a" containerID="b2a37b3e3c9a3e85cc9b84e71639edcde611e349342d41d5eaff61dda383d309" exitCode=0 Jan 27 14:44:25 crc kubenswrapper[4833]: I0127 14:44:25.788021 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" event={"ID":"6b8c7eee-cf33-40b8-82b9-e88287b52d3a","Type":"ContainerDied","Data":"b2a37b3e3c9a3e85cc9b84e71639edcde611e349342d41d5eaff61dda383d309"} Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.206599 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.277874 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory\") pod \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.277926 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam\") pod \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.277963 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l99s\" (UniqueName: \"kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s\") pod \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\" (UID: \"6b8c7eee-cf33-40b8-82b9-e88287b52d3a\") " Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.285833 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s" (OuterVolumeSpecName: "kube-api-access-5l99s") pod "6b8c7eee-cf33-40b8-82b9-e88287b52d3a" (UID: "6b8c7eee-cf33-40b8-82b9-e88287b52d3a"). InnerVolumeSpecName "kube-api-access-5l99s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.315673 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory" (OuterVolumeSpecName: "inventory") pod "6b8c7eee-cf33-40b8-82b9-e88287b52d3a" (UID: "6b8c7eee-cf33-40b8-82b9-e88287b52d3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.340030 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6b8c7eee-cf33-40b8-82b9-e88287b52d3a" (UID: "6b8c7eee-cf33-40b8-82b9-e88287b52d3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.379732 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.379774 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.379789 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l99s\" (UniqueName: \"kubernetes.io/projected/6b8c7eee-cf33-40b8-82b9-e88287b52d3a-kube-api-access-5l99s\") on node \"crc\" DevicePath \"\"" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.812793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" 
event={"ID":"6b8c7eee-cf33-40b8-82b9-e88287b52d3a","Type":"ContainerDied","Data":"4bad864c7392092616a05379e23f07314a188ae46182323378bcf7cce40631af"} Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.813111 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-899ks" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.813132 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bad864c7392092616a05379e23f07314a188ae46182323378bcf7cce40631af" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.900775 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96"] Jan 27 14:44:27 crc kubenswrapper[4833]: E0127 14:44:27.901290 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8c7eee-cf33-40b8-82b9-e88287b52d3a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.901312 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8c7eee-cf33-40b8-82b9-e88287b52d3a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.901585 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8c7eee-cf33-40b8-82b9-e88287b52d3a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.902509 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.907554 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.910033 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.910181 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.910237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.915241 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96"] Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.988364 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsx6\" (UniqueName: \"kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.988701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:27 crc kubenswrapper[4833]: I0127 14:44:27.988887 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.090220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.090403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drsx6\" (UniqueName: \"kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.090495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.096025 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.096034 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.110978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drsx6\" (UniqueName: \"kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g4k96\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.222139 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:44:28 crc kubenswrapper[4833]: I0127 14:44:28.815917 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96"] Jan 27 14:44:29 crc kubenswrapper[4833]: I0127 14:44:29.828992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" event={"ID":"6c84c399-b62e-4a5d-94b1-b5e186b20a93","Type":"ContainerStarted","Data":"ad9b020b4999af68da271e4e47af568e98f0187ea631604cd68b721f67c6dea0"} Jan 27 14:44:29 crc kubenswrapper[4833]: I0127 14:44:29.829200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" event={"ID":"6c84c399-b62e-4a5d-94b1-b5e186b20a93","Type":"ContainerStarted","Data":"47aa79fc5e609ecc07d19e070edceb57cef6fb4ec96419e04b1de09f4557da7b"} Jan 27 14:44:29 crc kubenswrapper[4833]: I0127 14:44:29.850905 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" podStartSLOduration=2.378138114 podStartE2EDuration="2.8508887s" podCreationTimestamp="2026-01-27 14:44:27 +0000 UTC" firstStartedPulling="2026-01-27 14:44:28.818301674 +0000 UTC m=+1970.469626076" lastFinishedPulling="2026-01-27 14:44:29.29105222 +0000 UTC m=+1970.942376662" observedRunningTime="2026-01-27 14:44:29.847879487 +0000 UTC m=+1971.499203889" watchObservedRunningTime="2026-01-27 14:44:29.8508887 +0000 UTC m=+1971.502213102" Jan 27 14:44:39 crc kubenswrapper[4833]: I0127 14:44:39.057925 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gspd9"] Jan 27 14:44:39 crc kubenswrapper[4833]: I0127 14:44:39.065895 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gspd9"] Jan 27 14:44:39 crc kubenswrapper[4833]: I0127 14:44:39.228186 
4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14aee9b7-55c7-4bcd-b79c-e39e491c4d5d" path="/var/lib/kubelet/pods/14aee9b7-55c7-4bcd-b79c-e39e491c4d5d/volumes" Jan 27 14:44:45 crc kubenswrapper[4833]: I0127 14:44:45.032970 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-76xtr"] Jan 27 14:44:45 crc kubenswrapper[4833]: I0127 14:44:45.044821 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-76xtr"] Jan 27 14:44:45 crc kubenswrapper[4833]: I0127 14:44:45.228777 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de97b8b-2681-4c26-83d3-63e5ad9eee7b" path="/var/lib/kubelet/pods/8de97b8b-2681-4c26-83d3-63e5ad9eee7b/volumes" Jan 27 14:44:55 crc kubenswrapper[4833]: I0127 14:44:55.335119 4833 scope.go:117] "RemoveContainer" containerID="70a76e7c890080c0b4c055b53088525545528bcdb73af41a24f3059c18fc0092" Jan 27 14:44:55 crc kubenswrapper[4833]: I0127 14:44:55.394803 4833 scope.go:117] "RemoveContainer" containerID="284738e50d0903b413f5bb513dd88cafaffcc083a5c9f42d6a44b70c0bf5f752" Jan 27 14:44:55 crc kubenswrapper[4833]: I0127 14:44:55.466626 4833 scope.go:117] "RemoveContainer" containerID="82a4158d2509f18dda68bdb8dc457083323a5e1849a68915f75d0af07001d40b" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.157427 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd"] Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.159937 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.164504 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.166204 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.169364 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd"] Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.191991 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74k7\" (UniqueName: \"kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.192081 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.192143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.294266 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74k7\" (UniqueName: \"kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.294356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.294390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.295686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.300396 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.313270 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74k7\" (UniqueName: \"kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7\") pod \"collect-profiles-29492085-v5pbd\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:00 crc kubenswrapper[4833]: I0127 14:45:00.491530 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:01 crc kubenswrapper[4833]: I0127 14:45:01.021606 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd"] Jan 27 14:45:01 crc kubenswrapper[4833]: I0127 14:45:01.146404 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" event={"ID":"e51cf794-1fe2-4bdc-ac6e-15e5174d0545","Type":"ContainerStarted","Data":"c09eaff8924ec0bfac7a2c673c4176f00c0a89446e08ae30250dfb516cdf9756"} Jan 27 14:45:02 crc kubenswrapper[4833]: I0127 14:45:02.159160 4833 generic.go:334] "Generic (PLEG): container finished" podID="e51cf794-1fe2-4bdc-ac6e-15e5174d0545" containerID="8b67e3f17764465ecc821aab87a2720fd96ce8cb3e467e419b160736d4042c38" exitCode=0 Jan 27 14:45:02 crc kubenswrapper[4833]: I0127 14:45:02.159205 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" 
event={"ID":"e51cf794-1fe2-4bdc-ac6e-15e5174d0545","Type":"ContainerDied","Data":"8b67e3f17764465ecc821aab87a2720fd96ce8cb3e467e419b160736d4042c38"} Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.571144 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.767877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume\") pod \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.768138 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume\") pod \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.768191 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z74k7\" (UniqueName: \"kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7\") pod \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\" (UID: \"e51cf794-1fe2-4bdc-ac6e-15e5174d0545\") " Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.769140 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume" (OuterVolumeSpecName: "config-volume") pod "e51cf794-1fe2-4bdc-ac6e-15e5174d0545" (UID: "e51cf794-1fe2-4bdc-ac6e-15e5174d0545"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.774825 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e51cf794-1fe2-4bdc-ac6e-15e5174d0545" (UID: "e51cf794-1fe2-4bdc-ac6e-15e5174d0545"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.775603 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7" (OuterVolumeSpecName: "kube-api-access-z74k7") pod "e51cf794-1fe2-4bdc-ac6e-15e5174d0545" (UID: "e51cf794-1fe2-4bdc-ac6e-15e5174d0545"). InnerVolumeSpecName "kube-api-access-z74k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.871005 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.871048 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:03 crc kubenswrapper[4833]: I0127 14:45:03.871058 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z74k7\" (UniqueName: \"kubernetes.io/projected/e51cf794-1fe2-4bdc-ac6e-15e5174d0545-kube-api-access-z74k7\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4833]: I0127 14:45:04.186572 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" 
event={"ID":"e51cf794-1fe2-4bdc-ac6e-15e5174d0545","Type":"ContainerDied","Data":"c09eaff8924ec0bfac7a2c673c4176f00c0a89446e08ae30250dfb516cdf9756"} Jan 27 14:45:04 crc kubenswrapper[4833]: I0127 14:45:04.186849 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09eaff8924ec0bfac7a2c673c4176f00c0a89446e08ae30250dfb516cdf9756" Jan 27 14:45:04 crc kubenswrapper[4833]: I0127 14:45:04.186611 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd" Jan 27 14:45:04 crc kubenswrapper[4833]: I0127 14:45:04.644168 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn"] Jan 27 14:45:04 crc kubenswrapper[4833]: I0127 14:45:04.651954 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-5qwsn"] Jan 27 14:45:05 crc kubenswrapper[4833]: I0127 14:45:05.228348 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902663e2-9d1b-47a2-af8b-fcd67c717b70" path="/var/lib/kubelet/pods/902663e2-9d1b-47a2-af8b-fcd67c717b70/volumes" Jan 27 14:45:10 crc kubenswrapper[4833]: I0127 14:45:10.267991 4833 generic.go:334] "Generic (PLEG): container finished" podID="6c84c399-b62e-4a5d-94b1-b5e186b20a93" containerID="ad9b020b4999af68da271e4e47af568e98f0187ea631604cd68b721f67c6dea0" exitCode=0 Jan 27 14:45:10 crc kubenswrapper[4833]: I0127 14:45:10.268094 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" event={"ID":"6c84c399-b62e-4a5d-94b1-b5e186b20a93","Type":"ContainerDied","Data":"ad9b020b4999af68da271e4e47af568e98f0187ea631604cd68b721f67c6dea0"} Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.763021 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.936811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drsx6\" (UniqueName: \"kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6\") pod \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.937054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam\") pod \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.937154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory\") pod \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\" (UID: \"6c84c399-b62e-4a5d-94b1-b5e186b20a93\") " Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.942170 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6" (OuterVolumeSpecName: "kube-api-access-drsx6") pod "6c84c399-b62e-4a5d-94b1-b5e186b20a93" (UID: "6c84c399-b62e-4a5d-94b1-b5e186b20a93"). InnerVolumeSpecName "kube-api-access-drsx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.962724 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory" (OuterVolumeSpecName: "inventory") pod "6c84c399-b62e-4a5d-94b1-b5e186b20a93" (UID: "6c84c399-b62e-4a5d-94b1-b5e186b20a93"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:11 crc kubenswrapper[4833]: I0127 14:45:11.974237 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c84c399-b62e-4a5d-94b1-b5e186b20a93" (UID: "6c84c399-b62e-4a5d-94b1-b5e186b20a93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.039619 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.039681 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drsx6\" (UniqueName: \"kubernetes.io/projected/6c84c399-b62e-4a5d-94b1-b5e186b20a93-kube-api-access-drsx6\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.039704 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c84c399-b62e-4a5d-94b1-b5e186b20a93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.293097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" event={"ID":"6c84c399-b62e-4a5d-94b1-b5e186b20a93","Type":"ContainerDied","Data":"47aa79fc5e609ecc07d19e070edceb57cef6fb4ec96419e04b1de09f4557da7b"} Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.293147 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47aa79fc5e609ecc07d19e070edceb57cef6fb4ec96419e04b1de09f4557da7b" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 
14:45:12.293216 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g4k96" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.415641 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw"] Jan 27 14:45:12 crc kubenswrapper[4833]: E0127 14:45:12.416186 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51cf794-1fe2-4bdc-ac6e-15e5174d0545" containerName="collect-profiles" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.416210 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51cf794-1fe2-4bdc-ac6e-15e5174d0545" containerName="collect-profiles" Jan 27 14:45:12 crc kubenswrapper[4833]: E0127 14:45:12.416237 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c84c399-b62e-4a5d-94b1-b5e186b20a93" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.416248 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c84c399-b62e-4a5d-94b1-b5e186b20a93" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.416541 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c84c399-b62e-4a5d-94b1-b5e186b20a93" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.416562 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51cf794-1fe2-4bdc-ac6e-15e5174d0545" containerName="collect-profiles" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.417537 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.420992 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.421087 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.421430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.421527 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.428045 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw"] Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.557049 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jp7r\" (UniqueName: \"kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.557236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc 
kubenswrapper[4833]: I0127 14:45:12.557305 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.659354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.659429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.659564 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jp7r\" (UniqueName: \"kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.666500 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.666618 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.678084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jp7r\" (UniqueName: \"kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jswpw\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:12 crc kubenswrapper[4833]: I0127 14:45:12.741224 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:45:13 crc kubenswrapper[4833]: I0127 14:45:13.272163 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw"] Jan 27 14:45:13 crc kubenswrapper[4833]: I0127 14:45:13.303055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" event={"ID":"6ebc1387-6b12-4592-86d6-92fe757cfd6b","Type":"ContainerStarted","Data":"7b76603126696ab80f3a35972b65d03da94fc7956efb3915b5f6271c13d0a6f3"} Jan 27 14:45:14 crc kubenswrapper[4833]: I0127 14:45:14.314856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" event={"ID":"6ebc1387-6b12-4592-86d6-92fe757cfd6b","Type":"ContainerStarted","Data":"c56f3478c547c4859da75e39df1e3ebe7ddfa0cdf927265a0c6f8a4d4d6af04f"} Jan 27 14:45:14 crc kubenswrapper[4833]: I0127 14:45:14.337461 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" podStartSLOduration=1.803392715 podStartE2EDuration="2.337418858s" podCreationTimestamp="2026-01-27 14:45:12 +0000 UTC" firstStartedPulling="2026-01-27 14:45:13.289301324 +0000 UTC m=+2014.940625726" lastFinishedPulling="2026-01-27 14:45:13.823327467 +0000 UTC m=+2015.474651869" observedRunningTime="2026-01-27 14:45:14.328874236 +0000 UTC m=+2015.980198638" watchObservedRunningTime="2026-01-27 14:45:14.337418858 +0000 UTC m=+2015.988743260" Jan 27 14:45:28 crc kubenswrapper[4833]: I0127 14:45:28.039512 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jpzsm"] Jan 27 14:45:28 crc kubenswrapper[4833]: I0127 14:45:28.048729 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jpzsm"] Jan 27 14:45:29 crc kubenswrapper[4833]: I0127 
14:45:29.233377 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211af2ba-ae31-4ecd-9063-b277bffb42b7" path="/var/lib/kubelet/pods/211af2ba-ae31-4ecd-9063-b277bffb42b7/volumes" Jan 27 14:45:32 crc kubenswrapper[4833]: I0127 14:45:32.261293 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:45:32 crc kubenswrapper[4833]: I0127 14:45:32.262070 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.625059 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.630378 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.644696 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.721152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.721580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qknpn\" (UniqueName: \"kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.721677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.823849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qknpn\" (UniqueName: \"kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.823919 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.824053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.824578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.824616 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.844088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qknpn\" (UniqueName: \"kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn\") pod \"redhat-operators-fbzgb\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:33 crc kubenswrapper[4833]: I0127 14:45:33.967992 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:34 crc kubenswrapper[4833]: I0127 14:45:34.426909 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:34 crc kubenswrapper[4833]: I0127 14:45:34.563872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerStarted","Data":"0e1876942d93b90800cb33c5f1d78bc1aedf984acf79bada6db3816e87e35d59"} Jan 27 14:45:35 crc kubenswrapper[4833]: I0127 14:45:35.578339 4833 generic.go:334] "Generic (PLEG): container finished" podID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerID="d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6" exitCode=0 Jan 27 14:45:35 crc kubenswrapper[4833]: I0127 14:45:35.578499 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerDied","Data":"d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6"} Jan 27 14:45:36 crc kubenswrapper[4833]: I0127 14:45:36.594880 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerStarted","Data":"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d"} Jan 27 14:45:38 crc kubenswrapper[4833]: I0127 14:45:38.619918 4833 generic.go:334] "Generic (PLEG): container finished" podID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerID="7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d" exitCode=0 Jan 27 14:45:38 crc kubenswrapper[4833]: I0127 14:45:38.619990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" 
event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerDied","Data":"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d"} Jan 27 14:45:40 crc kubenswrapper[4833]: I0127 14:45:40.643814 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerStarted","Data":"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454"} Jan 27 14:45:40 crc kubenswrapper[4833]: I0127 14:45:40.674586 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbzgb" podStartSLOduration=3.4742539 podStartE2EDuration="7.674563941s" podCreationTimestamp="2026-01-27 14:45:33 +0000 UTC" firstStartedPulling="2026-01-27 14:45:35.581797097 +0000 UTC m=+2037.233121549" lastFinishedPulling="2026-01-27 14:45:39.782107158 +0000 UTC m=+2041.433431590" observedRunningTime="2026-01-27 14:45:40.667099063 +0000 UTC m=+2042.318423465" watchObservedRunningTime="2026-01-27 14:45:40.674563941 +0000 UTC m=+2042.325888363" Jan 27 14:45:43 crc kubenswrapper[4833]: I0127 14:45:43.969177 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:43 crc kubenswrapper[4833]: I0127 14:45:43.970603 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:45 crc kubenswrapper[4833]: I0127 14:45:45.021249 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbzgb" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="registry-server" probeResult="failure" output=< Jan 27 14:45:45 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:45:45 crc kubenswrapper[4833]: > Jan 27 14:45:54 crc kubenswrapper[4833]: I0127 14:45:54.036680 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:54 crc kubenswrapper[4833]: I0127 14:45:54.100877 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:54 crc kubenswrapper[4833]: I0127 14:45:54.270327 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:55 crc kubenswrapper[4833]: I0127 14:45:55.574522 4833 scope.go:117] "RemoveContainer" containerID="3c25a67d1993a9ec71f11e0b2f16b1ae7b21b3fd3a86528a3ae7709965ecb36c" Jan 27 14:45:55 crc kubenswrapper[4833]: I0127 14:45:55.643206 4833 scope.go:117] "RemoveContainer" containerID="97ccb2c62505785dd5d58879edef73f586f6368e2b5c7ea9850b522eac91ab0e" Jan 27 14:45:55 crc kubenswrapper[4833]: I0127 14:45:55.837260 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbzgb" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="registry-server" containerID="cri-o://8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454" gracePeriod=2 Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.263811 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.438006 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qknpn\" (UniqueName: \"kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn\") pod \"d8b680c5-f155-47d1-ae09-418ed642cb6e\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.438092 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content\") pod \"d8b680c5-f155-47d1-ae09-418ed642cb6e\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.438179 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities\") pod \"d8b680c5-f155-47d1-ae09-418ed642cb6e\" (UID: \"d8b680c5-f155-47d1-ae09-418ed642cb6e\") " Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.439496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities" (OuterVolumeSpecName: "utilities") pod "d8b680c5-f155-47d1-ae09-418ed642cb6e" (UID: "d8b680c5-f155-47d1-ae09-418ed642cb6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.444331 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn" (OuterVolumeSpecName: "kube-api-access-qknpn") pod "d8b680c5-f155-47d1-ae09-418ed642cb6e" (UID: "d8b680c5-f155-47d1-ae09-418ed642cb6e"). InnerVolumeSpecName "kube-api-access-qknpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.541436 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qknpn\" (UniqueName: \"kubernetes.io/projected/d8b680c5-f155-47d1-ae09-418ed642cb6e-kube-api-access-qknpn\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.541489 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.557130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8b680c5-f155-47d1-ae09-418ed642cb6e" (UID: "d8b680c5-f155-47d1-ae09-418ed642cb6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.643179 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8b680c5-f155-47d1-ae09-418ed642cb6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.855261 4833 generic.go:334] "Generic (PLEG): container finished" podID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerID="8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454" exitCode=0 Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.855343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerDied","Data":"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454"} Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.855393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fbzgb" event={"ID":"d8b680c5-f155-47d1-ae09-418ed642cb6e","Type":"ContainerDied","Data":"0e1876942d93b90800cb33c5f1d78bc1aedf984acf79bada6db3816e87e35d59"} Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.855431 4833 scope.go:117] "RemoveContainer" containerID="8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.856048 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbzgb" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.879774 4833 scope.go:117] "RemoveContainer" containerID="7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.906280 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.916891 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbzgb"] Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.922159 4833 scope.go:117] "RemoveContainer" containerID="d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.990632 4833 scope.go:117] "RemoveContainer" containerID="8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454" Jan 27 14:45:56 crc kubenswrapper[4833]: E0127 14:45:56.991480 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454\": container with ID starting with 8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454 not found: ID does not exist" containerID="8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.991520 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454"} err="failed to get container status \"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454\": rpc error: code = NotFound desc = could not find container \"8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454\": container with ID starting with 8c8756dcdf5aa2b921b5c7609e16d9e3910f4c2f719abc88272b5f03570ab454 not found: ID does not exist" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.991543 4833 scope.go:117] "RemoveContainer" containerID="7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d" Jan 27 14:45:56 crc kubenswrapper[4833]: E0127 14:45:56.992060 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d\": container with ID starting with 7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d not found: ID does not exist" containerID="7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.992112 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d"} err="failed to get container status \"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d\": rpc error: code = NotFound desc = could not find container \"7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d\": container with ID starting with 7d81863bfe9f88156c3f126053b6168318b0308aad0ae8d79bd3f469245f970d not found: ID does not exist" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.992146 4833 scope.go:117] "RemoveContainer" containerID="d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6" Jan 27 14:45:56 crc kubenswrapper[4833]: E0127 
14:45:56.992805 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6\": container with ID starting with d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6 not found: ID does not exist" containerID="d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6" Jan 27 14:45:56 crc kubenswrapper[4833]: I0127 14:45:56.992879 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6"} err="failed to get container status \"d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6\": rpc error: code = NotFound desc = could not find container \"d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6\": container with ID starting with d19c15375fc6a00254e0d9a44f338420955ac8aff2bcedfbaa7dfb9ad5de18f6 not found: ID does not exist" Jan 27 14:45:57 crc kubenswrapper[4833]: I0127 14:45:57.226743 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" path="/var/lib/kubelet/pods/d8b680c5-f155-47d1-ae09-418ed642cb6e/volumes" Jan 27 14:46:02 crc kubenswrapper[4833]: I0127 14:46:02.260634 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:46:02 crc kubenswrapper[4833]: I0127 14:46:02.261052 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 14:46:07 crc kubenswrapper[4833]: I0127 14:46:07.968682 4833 generic.go:334] "Generic (PLEG): container finished" podID="6ebc1387-6b12-4592-86d6-92fe757cfd6b" containerID="c56f3478c547c4859da75e39df1e3ebe7ddfa0cdf927265a0c6f8a4d4d6af04f" exitCode=0 Jan 27 14:46:07 crc kubenswrapper[4833]: I0127 14:46:07.968784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" event={"ID":"6ebc1387-6b12-4592-86d6-92fe757cfd6b","Type":"ContainerDied","Data":"c56f3478c547c4859da75e39df1e3ebe7ddfa0cdf927265a0c6f8a4d4d6af04f"} Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.385487 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.423215 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jp7r\" (UniqueName: \"kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r\") pod \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.423813 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam\") pod \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.423926 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory\") pod \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\" (UID: \"6ebc1387-6b12-4592-86d6-92fe757cfd6b\") " Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.433785 4833 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r" (OuterVolumeSpecName: "kube-api-access-8jp7r") pod "6ebc1387-6b12-4592-86d6-92fe757cfd6b" (UID: "6ebc1387-6b12-4592-86d6-92fe757cfd6b"). InnerVolumeSpecName "kube-api-access-8jp7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.451643 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ebc1387-6b12-4592-86d6-92fe757cfd6b" (UID: "6ebc1387-6b12-4592-86d6-92fe757cfd6b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.459439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory" (OuterVolumeSpecName: "inventory") pod "6ebc1387-6b12-4592-86d6-92fe757cfd6b" (UID: "6ebc1387-6b12-4592-86d6-92fe757cfd6b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.525640 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jp7r\" (UniqueName: \"kubernetes.io/projected/6ebc1387-6b12-4592-86d6-92fe757cfd6b-kube-api-access-8jp7r\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.525667 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:09 crc kubenswrapper[4833]: I0127 14:46:09.525678 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ebc1387-6b12-4592-86d6-92fe757cfd6b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.037398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" event={"ID":"6ebc1387-6b12-4592-86d6-92fe757cfd6b","Type":"ContainerDied","Data":"7b76603126696ab80f3a35972b65d03da94fc7956efb3915b5f6271c13d0a6f3"} Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.037474 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b76603126696ab80f3a35972b65d03da94fc7956efb3915b5f6271c13d0a6f3" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.037564 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jswpw" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.092258 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ntzvk"] Jan 27 14:46:10 crc kubenswrapper[4833]: E0127 14:46:10.092906 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebc1387-6b12-4592-86d6-92fe757cfd6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.092932 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebc1387-6b12-4592-86d6-92fe757cfd6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:10 crc kubenswrapper[4833]: E0127 14:46:10.092953 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="extract-content" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.092961 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="extract-content" Jan 27 14:46:10 crc kubenswrapper[4833]: E0127 14:46:10.093005 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="registry-server" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.093013 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="registry-server" Jan 27 14:46:10 crc kubenswrapper[4833]: E0127 14:46:10.093028 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="extract-utilities" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.093039 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="extract-utilities" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.093262 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebc1387-6b12-4592-86d6-92fe757cfd6b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.093278 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b680c5-f155-47d1-ae09-418ed642cb6e" containerName="registry-server" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.094053 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.097807 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.099647 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.099719 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.104538 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.135172 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.135264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam\") 
pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.135336 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lxs\" (UniqueName: \"kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.144859 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ntzvk"] Jan 27 14:46:10 crc kubenswrapper[4833]: E0127 14:46:10.219909 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ebc1387_6b12_4592_86d6_92fe757cfd6b.slice/crio-7b76603126696ab80f3a35972b65d03da94fc7956efb3915b5f6271c13d0a6f3\": RecentStats: unable to find data in memory cache]" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.237724 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.237824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" 
Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.237910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4lxs\" (UniqueName: \"kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.241870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.242869 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.255885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4lxs\" (UniqueName: \"kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs\") pod \"ssh-known-hosts-edpm-deployment-ntzvk\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.424183 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:10 crc kubenswrapper[4833]: I0127 14:46:10.998375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ntzvk"] Jan 27 14:46:11 crc kubenswrapper[4833]: W0127 14:46:11.002253 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4cf7b42_9be9_4e04_853a_a7f0e40edfe3.slice/crio-96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc WatchSource:0}: Error finding container 96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc: Status 404 returned error can't find the container with id 96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc Jan 27 14:46:11 crc kubenswrapper[4833]: I0127 14:46:11.047395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" event={"ID":"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3","Type":"ContainerStarted","Data":"96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc"} Jan 27 14:46:12 crc kubenswrapper[4833]: I0127 14:46:12.062015 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" event={"ID":"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3","Type":"ContainerStarted","Data":"03b3ce54851f03d884ed27536278e06da4bbb461779dda233b452519ec525f38"} Jan 27 14:46:12 crc kubenswrapper[4833]: I0127 14:46:12.093396 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" podStartSLOduration=1.58845551 podStartE2EDuration="2.093368253s" podCreationTimestamp="2026-01-27 14:46:10 +0000 UTC" firstStartedPulling="2026-01-27 14:46:11.004201179 +0000 UTC m=+2072.655525581" lastFinishedPulling="2026-01-27 14:46:11.509113902 +0000 UTC m=+2073.160438324" observedRunningTime="2026-01-27 14:46:12.084216037 +0000 UTC m=+2073.735540479" 
watchObservedRunningTime="2026-01-27 14:46:12.093368253 +0000 UTC m=+2073.744692695" Jan 27 14:46:19 crc kubenswrapper[4833]: I0127 14:46:19.137016 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" containerID="03b3ce54851f03d884ed27536278e06da4bbb461779dda233b452519ec525f38" exitCode=0 Jan 27 14:46:19 crc kubenswrapper[4833]: I0127 14:46:19.137090 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" event={"ID":"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3","Type":"ContainerDied","Data":"03b3ce54851f03d884ed27536278e06da4bbb461779dda233b452519ec525f38"} Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.570521 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.658506 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam\") pod \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.658647 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4lxs\" (UniqueName: \"kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs\") pod \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.658694 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0\") pod \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\" (UID: \"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3\") " Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 
14:46:20.664624 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs" (OuterVolumeSpecName: "kube-api-access-h4lxs") pod "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" (UID: "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3"). InnerVolumeSpecName "kube-api-access-h4lxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.686345 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" (UID: "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.688176 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" (UID: "a4cf7b42-9be9-4e04-853a-a7f0e40edfe3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.760723 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4lxs\" (UniqueName: \"kubernetes.io/projected/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-kube-api-access-h4lxs\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.760771 4833 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:20 crc kubenswrapper[4833]: I0127 14:46:20.760783 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a4cf7b42-9be9-4e04-853a-a7f0e40edfe3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.163050 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" event={"ID":"a4cf7b42-9be9-4e04-853a-a7f0e40edfe3","Type":"ContainerDied","Data":"96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc"} Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.163133 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96acaf87466a9b5c9970c1dd5e55b00e783358fb51158b4868de44ec6b7ac4dc" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.163096 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ntzvk" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.270229 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g"] Jan 27 14:46:21 crc kubenswrapper[4833]: E0127 14:46:21.270792 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.270814 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.271054 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4cf7b42-9be9-4e04-853a-a7f0e40edfe3" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.271828 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.274012 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.274290 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.274325 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.274640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.291305 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g"] Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.371169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqfc\" (UniqueName: \"kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.371234 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.371254 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.473695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.473776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.474076 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqfc\" (UniqueName: \"kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.479737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: 
\"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.485492 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.509039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqfc\" (UniqueName: \"kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xcw7g\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:21 crc kubenswrapper[4833]: I0127 14:46:21.595167 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:22 crc kubenswrapper[4833]: I0127 14:46:22.120026 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g"] Jan 27 14:46:22 crc kubenswrapper[4833]: I0127 14:46:22.124563 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:46:22 crc kubenswrapper[4833]: I0127 14:46:22.172253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" event={"ID":"5eea495f-1339-43a0-9ec7-b50211d609d2","Type":"ContainerStarted","Data":"ffac6993b3b7622d6053d366f1381f2b7c501cae8ba44ffc9f2eae10335b42b9"} Jan 27 14:46:23 crc kubenswrapper[4833]: I0127 14:46:23.187946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" event={"ID":"5eea495f-1339-43a0-9ec7-b50211d609d2","Type":"ContainerStarted","Data":"e95a078e0f0b8180bedf9c5bbccea0a7fedc812db05338eb0d1debcfcb054a61"} Jan 27 14:46:23 crc kubenswrapper[4833]: I0127 14:46:23.219947 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" podStartSLOduration=1.780016942 podStartE2EDuration="2.219924478s" podCreationTimestamp="2026-01-27 14:46:21 +0000 UTC" firstStartedPulling="2026-01-27 14:46:22.124297241 +0000 UTC m=+2083.775621633" lastFinishedPulling="2026-01-27 14:46:22.564204757 +0000 UTC m=+2084.215529169" observedRunningTime="2026-01-27 14:46:23.208023947 +0000 UTC m=+2084.859348349" watchObservedRunningTime="2026-01-27 14:46:23.219924478 +0000 UTC m=+2084.871248880" Jan 27 14:46:30 crc kubenswrapper[4833]: E0127 14:46:30.738374 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eea495f_1339_43a0_9ec7_b50211d609d2.slice/crio-e95a078e0f0b8180bedf9c5bbccea0a7fedc812db05338eb0d1debcfcb054a61.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:46:31 crc kubenswrapper[4833]: I0127 14:46:31.278850 4833 generic.go:334] "Generic (PLEG): container finished" podID="5eea495f-1339-43a0-9ec7-b50211d609d2" containerID="e95a078e0f0b8180bedf9c5bbccea0a7fedc812db05338eb0d1debcfcb054a61" exitCode=0 Jan 27 14:46:31 crc kubenswrapper[4833]: I0127 14:46:31.278891 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" event={"ID":"5eea495f-1339-43a0-9ec7-b50211d609d2","Type":"ContainerDied","Data":"e95a078e0f0b8180bedf9c5bbccea0a7fedc812db05338eb0d1debcfcb054a61"} Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.261113 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.261623 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.261740 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.262938 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.263048 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0" gracePeriod=600 Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.753465 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.850669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam\") pod \"5eea495f-1339-43a0-9ec7-b50211d609d2\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.851025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory\") pod \"5eea495f-1339-43a0-9ec7-b50211d609d2\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.851081 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgqfc\" (UniqueName: \"kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc\") pod \"5eea495f-1339-43a0-9ec7-b50211d609d2\" (UID: \"5eea495f-1339-43a0-9ec7-b50211d609d2\") " Jan 27 14:46:32 crc kubenswrapper[4833]: 
I0127 14:46:32.857617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc" (OuterVolumeSpecName: "kube-api-access-tgqfc") pod "5eea495f-1339-43a0-9ec7-b50211d609d2" (UID: "5eea495f-1339-43a0-9ec7-b50211d609d2"). InnerVolumeSpecName "kube-api-access-tgqfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.881330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory" (OuterVolumeSpecName: "inventory") pod "5eea495f-1339-43a0-9ec7-b50211d609d2" (UID: "5eea495f-1339-43a0-9ec7-b50211d609d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.884692 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5eea495f-1339-43a0-9ec7-b50211d609d2" (UID: "5eea495f-1339-43a0-9ec7-b50211d609d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.954138 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.954528 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgqfc\" (UniqueName: \"kubernetes.io/projected/5eea495f-1339-43a0-9ec7-b50211d609d2-kube-api-access-tgqfc\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:32 crc kubenswrapper[4833]: I0127 14:46:32.954550 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5eea495f-1339-43a0-9ec7-b50211d609d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.306033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" event={"ID":"5eea495f-1339-43a0-9ec7-b50211d609d2","Type":"ContainerDied","Data":"ffac6993b3b7622d6053d366f1381f2b7c501cae8ba44ffc9f2eae10335b42b9"} Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.306074 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffac6993b3b7622d6053d366f1381f2b7c501cae8ba44ffc9f2eae10335b42b9" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.306115 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xcw7g" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.310588 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0" exitCode=0 Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.310630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0"} Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.310656 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"} Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.310671 4833 scope.go:117] "RemoveContainer" containerID="f87ed4a80bdf1b8fc2bb74291c5da33f5ec5b53bd41cb992522b7d8788bed9ce" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.501592 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f"] Jan 27 14:46:33 crc kubenswrapper[4833]: E0127 14:46:33.502387 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eea495f-1339-43a0-9ec7-b50211d609d2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.502412 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eea495f-1339-43a0-9ec7-b50211d609d2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.502647 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5eea495f-1339-43a0-9ec7-b50211d609d2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.503690 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.509904 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.509958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.510122 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.510190 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.522737 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f"] Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.566551 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrkrj\" (UniqueName: \"kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.566864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.567009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.667936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.668004 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.668118 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrkrj\" (UniqueName: \"kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 
14:46:33.673643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.682626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.687164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrkrj\" (UniqueName: \"kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:33 crc kubenswrapper[4833]: I0127 14:46:33.834327 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:34 crc kubenswrapper[4833]: I0127 14:46:34.378369 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f"] Jan 27 14:46:34 crc kubenswrapper[4833]: W0127 14:46:34.388425 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0209691f_9aa1_4c9e_abeb_682686b65cb5.slice/crio-69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945 WatchSource:0}: Error finding container 69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945: Status 404 returned error can't find the container with id 69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945 Jan 27 14:46:35 crc kubenswrapper[4833]: I0127 14:46:35.341358 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" event={"ID":"0209691f-9aa1-4c9e-abeb-682686b65cb5","Type":"ContainerStarted","Data":"00ed3cdbe1c67dd64cc80394a802c9e5985193d1622a3411724ea7a23605c37a"} Jan 27 14:46:35 crc kubenswrapper[4833]: I0127 14:46:35.341706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" event={"ID":"0209691f-9aa1-4c9e-abeb-682686b65cb5","Type":"ContainerStarted","Data":"69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945"} Jan 27 14:46:44 crc kubenswrapper[4833]: I0127 14:46:44.439648 4833 generic.go:334] "Generic (PLEG): container finished" podID="0209691f-9aa1-4c9e-abeb-682686b65cb5" containerID="00ed3cdbe1c67dd64cc80394a802c9e5985193d1622a3411724ea7a23605c37a" exitCode=0 Jan 27 14:46:44 crc kubenswrapper[4833]: I0127 14:46:44.439777 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" 
event={"ID":"0209691f-9aa1-4c9e-abeb-682686b65cb5","Type":"ContainerDied","Data":"00ed3cdbe1c67dd64cc80394a802c9e5985193d1622a3411724ea7a23605c37a"} Jan 27 14:46:45 crc kubenswrapper[4833]: I0127 14:46:45.995295 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.131612 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory\") pod \"0209691f-9aa1-4c9e-abeb-682686b65cb5\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.131974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam\") pod \"0209691f-9aa1-4c9e-abeb-682686b65cb5\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.132011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrkrj\" (UniqueName: \"kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj\") pod \"0209691f-9aa1-4c9e-abeb-682686b65cb5\" (UID: \"0209691f-9aa1-4c9e-abeb-682686b65cb5\") " Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.138312 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj" (OuterVolumeSpecName: "kube-api-access-jrkrj") pod "0209691f-9aa1-4c9e-abeb-682686b65cb5" (UID: "0209691f-9aa1-4c9e-abeb-682686b65cb5"). InnerVolumeSpecName "kube-api-access-jrkrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.161700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0209691f-9aa1-4c9e-abeb-682686b65cb5" (UID: "0209691f-9aa1-4c9e-abeb-682686b65cb5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.162710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory" (OuterVolumeSpecName: "inventory") pod "0209691f-9aa1-4c9e-abeb-682686b65cb5" (UID: "0209691f-9aa1-4c9e-abeb-682686b65cb5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.234347 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.234378 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrkrj\" (UniqueName: \"kubernetes.io/projected/0209691f-9aa1-4c9e-abeb-682686b65cb5-kube-api-access-jrkrj\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.234395 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0209691f-9aa1-4c9e-abeb-682686b65cb5-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.459117 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" 
event={"ID":"0209691f-9aa1-4c9e-abeb-682686b65cb5","Type":"ContainerDied","Data":"69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945"} Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.459159 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a0cc2d792b0a6334152073ebc044bca9aa7605a2a02fc034533e43418dc945" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.459159 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.554379 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq"] Jan 27 14:46:46 crc kubenswrapper[4833]: E0127 14:46:46.554848 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0209691f-9aa1-4c9e-abeb-682686b65cb5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.554874 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0209691f-9aa1-4c9e-abeb-682686b65cb5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.555132 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0209691f-9aa1-4c9e-abeb-682686b65cb5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.555941 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.562394 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.562889 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.563175 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.563609 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.563856 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.563881 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.564012 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.569000 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.574272 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq"] Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743231 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dzc\" (UniqueName: 
\"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743354 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743396 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743472 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743507 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.743688 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862146 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dzc\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: 
\"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.862514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.864356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.866962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc 
kubenswrapper[4833]: I0127 14:46:46.867355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.868262 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.868737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.869055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.871252 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.871367 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.871688 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.871896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.872209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.874268 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.878870 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.880360 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8dzc\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:46 crc kubenswrapper[4833]: I0127 14:46:46.885044 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq\" (UID: 
\"90180c07-3b18-4ce7-ae5b-c3288c171195\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:47 crc kubenswrapper[4833]: I0127 14:46:47.173825 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:46:47 crc kubenswrapper[4833]: I0127 14:46:47.741194 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq"] Jan 27 14:46:48 crc kubenswrapper[4833]: I0127 14:46:48.481962 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" event={"ID":"90180c07-3b18-4ce7-ae5b-c3288c171195","Type":"ContainerStarted","Data":"4dd17c3250b92042bef7402a7bd4c73283b949c9bcddff270557e49b1a908ad6"} Jan 27 14:46:48 crc kubenswrapper[4833]: I0127 14:46:48.482307 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" event={"ID":"90180c07-3b18-4ce7-ae5b-c3288c171195","Type":"ContainerStarted","Data":"0548907a136a84b7a23b12dadb25e34cba14b3f108de440475658f37881b5724"} Jan 27 14:46:48 crc kubenswrapper[4833]: I0127 14:46:48.507242 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" podStartSLOduration=2.074016416 podStartE2EDuration="2.507219935s" podCreationTimestamp="2026-01-27 14:46:46 +0000 UTC" firstStartedPulling="2026-01-27 14:46:47.747789479 +0000 UTC m=+2109.399113931" lastFinishedPulling="2026-01-27 14:46:48.180993048 +0000 UTC m=+2109.832317450" observedRunningTime="2026-01-27 14:46:48.498865277 +0000 UTC m=+2110.150189679" watchObservedRunningTime="2026-01-27 14:46:48.507219935 +0000 UTC m=+2110.158544337" Jan 27 14:47:25 crc kubenswrapper[4833]: I0127 14:47:25.945185 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="90180c07-3b18-4ce7-ae5b-c3288c171195" containerID="4dd17c3250b92042bef7402a7bd4c73283b949c9bcddff270557e49b1a908ad6" exitCode=0 Jan 27 14:47:25 crc kubenswrapper[4833]: I0127 14:47:25.945273 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" event={"ID":"90180c07-3b18-4ce7-ae5b-c3288c171195","Type":"ContainerDied","Data":"4dd17c3250b92042bef7402a7bd4c73283b949c9bcddff270557e49b1a908ad6"} Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.500783 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.600840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.600965 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601018 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601076 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601139 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601169 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601329 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601344 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.601393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8dzc\" (UniqueName: 
\"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc\") pod \"90180c07-3b18-4ce7-ae5b-c3288c171195\" (UID: \"90180c07-3b18-4ce7-ae5b-c3288c171195\") " Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.608615 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.609422 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.609522 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.609607 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.609691 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.610625 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.611135 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc" (OuterVolumeSpecName: "kube-api-access-s8dzc") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). 
InnerVolumeSpecName "kube-api-access-s8dzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.611752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.612248 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.612351 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.614715 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). 
InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.617726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.640396 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory" (OuterVolumeSpecName: "inventory") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.641890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90180c07-3b18-4ce7-ae5b-c3288c171195" (UID: "90180c07-3b18-4ce7-ae5b-c3288c171195"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703641 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703675 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703687 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703699 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703708 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703718 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703730 4833 
reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703739 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703748 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703755 4833 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703764 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703773 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.703782 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8dzc\" (UniqueName: \"kubernetes.io/projected/90180c07-3b18-4ce7-ae5b-c3288c171195-kube-api-access-s8dzc\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: 
I0127 14:47:27.703791 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90180c07-3b18-4ce7-ae5b-c3288c171195-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.966582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" event={"ID":"90180c07-3b18-4ce7-ae5b-c3288c171195","Type":"ContainerDied","Data":"0548907a136a84b7a23b12dadb25e34cba14b3f108de440475658f37881b5724"} Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.966653 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq" Jan 27 14:47:27 crc kubenswrapper[4833]: I0127 14:47:27.966665 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0548907a136a84b7a23b12dadb25e34cba14b3f108de440475658f37881b5724" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.183763 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9"] Jan 27 14:47:28 crc kubenswrapper[4833]: E0127 14:47:28.185030 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90180c07-3b18-4ce7-ae5b-c3288c171195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.185289 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="90180c07-3b18-4ce7-ae5b-c3288c171195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.185975 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="90180c07-3b18-4ce7-ae5b-c3288c171195" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.187693 4833 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.196569 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9"] Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.241834 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.242212 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.242618 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.243155 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.243371 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.344812 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.344892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" 
(UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.345511 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.345625 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5mr\" (UniqueName: \"kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.345777 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.447923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.448055 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5z5mr\" (UniqueName: \"kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.448099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.449091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.449179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.449590 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: 
\"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.452654 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.457340 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.459785 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.470290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5mr\" (UniqueName: \"kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ng8m9\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:28 crc kubenswrapper[4833]: I0127 14:47:28.569683 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.087961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9"] Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.640500 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.646753 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.653671 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.673287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.673506 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jr8s\" (UniqueName: \"kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.673573 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " 
pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.776086 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jr8s\" (UniqueName: \"kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.776143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.776204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.776643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.777173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " 
pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.792876 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jr8s\" (UniqueName: \"kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s\") pod \"certified-operators-jfgkg\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.985127 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" event={"ID":"0d6b0751-b1bd-44ed-b6df-942f63c8b191","Type":"ContainerStarted","Data":"e87907259c2ecdf203e1728886ef84d2db221721be90b20aa8d9d06c17612766"} Jan 27 14:47:29 crc kubenswrapper[4833]: I0127 14:47:29.985182 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" event={"ID":"0d6b0751-b1bd-44ed-b6df-942f63c8b191","Type":"ContainerStarted","Data":"5205c1c925534e5dfebc631fda3b1c791a430b2f0190d1dbd530aea8a0a11229"} Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.008681 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" podStartSLOduration=1.536229925 podStartE2EDuration="2.008651285s" podCreationTimestamp="2026-01-27 14:47:28 +0000 UTC" firstStartedPulling="2026-01-27 14:47:29.089980374 +0000 UTC m=+2150.741304776" lastFinishedPulling="2026-01-27 14:47:29.562401734 +0000 UTC m=+2151.213726136" observedRunningTime="2026-01-27 14:47:30.000406345 +0000 UTC m=+2151.651730757" watchObservedRunningTime="2026-01-27 14:47:30.008651285 +0000 UTC m=+2151.659975687" Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.072192 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.652745 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:30 crc kubenswrapper[4833]: W0127 14:47:30.660539 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4bec2cb_c9ba_40dc_a4cb_5140b6ce1d0b.slice/crio-79908ab458fafa80c0dc4e70a22c67d8cdda7bd0bad05fd1c497692f712ca772 WatchSource:0}: Error finding container 79908ab458fafa80c0dc4e70a22c67d8cdda7bd0bad05fd1c497692f712ca772: Status 404 returned error can't find the container with id 79908ab458fafa80c0dc4e70a22c67d8cdda7bd0bad05fd1c497692f712ca772 Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.995262 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerID="6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c" exitCode=0 Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.995336 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerDied","Data":"6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c"} Jan 27 14:47:30 crc kubenswrapper[4833]: I0127 14:47:30.995617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerStarted","Data":"79908ab458fafa80c0dc4e70a22c67d8cdda7bd0bad05fd1c497692f712ca772"} Jan 27 14:47:32 crc kubenswrapper[4833]: I0127 14:47:32.005820 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" 
event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerStarted","Data":"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2"} Jan 27 14:47:32 crc kubenswrapper[4833]: E0127 14:47:32.355389 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4bec2cb_c9ba_40dc_a4cb_5140b6ce1d0b.slice/crio-conmon-ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:47:33 crc kubenswrapper[4833]: I0127 14:47:33.016814 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerID="ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2" exitCode=0 Jan 27 14:47:33 crc kubenswrapper[4833]: I0127 14:47:33.016876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerDied","Data":"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2"} Jan 27 14:47:34 crc kubenswrapper[4833]: I0127 14:47:34.026639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerStarted","Data":"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70"} Jan 27 14:47:34 crc kubenswrapper[4833]: I0127 14:47:34.053912 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfgkg" podStartSLOduration=2.6189727339999997 podStartE2EDuration="5.053892402s" podCreationTimestamp="2026-01-27 14:47:29 +0000 UTC" firstStartedPulling="2026-01-27 14:47:30.997603591 +0000 UTC m=+2152.648927993" lastFinishedPulling="2026-01-27 14:47:33.432523259 +0000 UTC m=+2155.083847661" observedRunningTime="2026-01-27 
14:47:34.044859014 +0000 UTC m=+2155.696183416" watchObservedRunningTime="2026-01-27 14:47:34.053892402 +0000 UTC m=+2155.705216804" Jan 27 14:47:40 crc kubenswrapper[4833]: I0127 14:47:40.072381 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:40 crc kubenswrapper[4833]: I0127 14:47:40.072916 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:40 crc kubenswrapper[4833]: I0127 14:47:40.120503 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:40 crc kubenswrapper[4833]: I0127 14:47:40.174553 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:40 crc kubenswrapper[4833]: I0127 14:47:40.372357 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.102826 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfgkg" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="registry-server" containerID="cri-o://9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70" gracePeriod=2 Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.612185 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.753382 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jr8s\" (UniqueName: \"kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s\") pod \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.753464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content\") pod \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.753488 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities\") pod \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\" (UID: \"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b\") " Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.755422 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities" (OuterVolumeSpecName: "utilities") pod "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" (UID: "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.768411 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s" (OuterVolumeSpecName: "kube-api-access-2jr8s") pod "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" (UID: "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b"). InnerVolumeSpecName "kube-api-access-2jr8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.856917 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jr8s\" (UniqueName: \"kubernetes.io/projected/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-kube-api-access-2jr8s\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:42 crc kubenswrapper[4833]: I0127 14:47:42.856951 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.115580 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerID="9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70" exitCode=0 Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.115775 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerDied","Data":"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70"} Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.115912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgkg" event={"ID":"d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b","Type":"ContainerDied","Data":"79908ab458fafa80c0dc4e70a22c67d8cdda7bd0bad05fd1c497692f712ca772"} Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.115938 4833 scope.go:117] "RemoveContainer" containerID="9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.115971 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfgkg" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.137769 4833 scope.go:117] "RemoveContainer" containerID="ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.155927 4833 scope.go:117] "RemoveContainer" containerID="6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.203408 4833 scope.go:117] "RemoveContainer" containerID="9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70" Jan 27 14:47:43 crc kubenswrapper[4833]: E0127 14:47:43.203998 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70\": container with ID starting with 9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70 not found: ID does not exist" containerID="9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.204083 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70"} err="failed to get container status \"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70\": rpc error: code = NotFound desc = could not find container \"9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70\": container with ID starting with 9dc6084015c5d5e6cacac04760e6dd71ca7c86aaf8dc74299b14cd93558d2b70 not found: ID does not exist" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.204114 4833 scope.go:117] "RemoveContainer" containerID="ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2" Jan 27 14:47:43 crc kubenswrapper[4833]: E0127 14:47:43.204498 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2\": container with ID starting with ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2 not found: ID does not exist" containerID="ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.204534 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2"} err="failed to get container status \"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2\": rpc error: code = NotFound desc = could not find container \"ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2\": container with ID starting with ef07505c411b22cd21c314e4e971e0f18bff6c22465f327f1312d4fd8b13bca2 not found: ID does not exist" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.204557 4833 scope.go:117] "RemoveContainer" containerID="6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c" Jan 27 14:47:43 crc kubenswrapper[4833]: E0127 14:47:43.204974 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c\": container with ID starting with 6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c not found: ID does not exist" containerID="6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.205055 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c"} err="failed to get container status \"6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c\": rpc error: code = NotFound desc = could not find container 
\"6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c\": container with ID starting with 6b7181651c5ab22cfc32780dc60804ea91c00924267e3ddb3e1f6f5798055e0c not found: ID does not exist" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.772093 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" (UID: "d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:47:43 crc kubenswrapper[4833]: I0127 14:47:43.777841 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:44 crc kubenswrapper[4833]: I0127 14:47:44.064776 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:44 crc kubenswrapper[4833]: I0127 14:47:44.072130 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfgkg"] Jan 27 14:47:45 crc kubenswrapper[4833]: I0127 14:47:45.227066 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" path="/var/lib/kubelet/pods/d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b/volumes" Jan 27 14:48:32 crc kubenswrapper[4833]: I0127 14:48:32.260839 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:48:32 crc kubenswrapper[4833]: I0127 14:48:32.261485 4833 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:48:36 crc kubenswrapper[4833]: I0127 14:48:36.614997 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" event={"ID":"0d6b0751-b1bd-44ed-b6df-942f63c8b191","Type":"ContainerDied","Data":"e87907259c2ecdf203e1728886ef84d2db221721be90b20aa8d9d06c17612766"} Jan 27 14:48:36 crc kubenswrapper[4833]: I0127 14:48:36.615566 4833 generic.go:334] "Generic (PLEG): container finished" podID="0d6b0751-b1bd-44ed-b6df-942f63c8b191" containerID="e87907259c2ecdf203e1728886ef84d2db221721be90b20aa8d9d06c17612766" exitCode=0 Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.056427 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.111702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.111847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.111950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.111983 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.112026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z5mr\" (UniqueName: \"kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.117648 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.118384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr" (OuterVolumeSpecName: "kube-api-access-5z5mr") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191"). InnerVolumeSpecName "kube-api-access-5z5mr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.134896 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:48:38 crc kubenswrapper[4833]: E0127 14:48:38.144882 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam podName:0d6b0751-b1bd-44ed-b6df-942f63c8b191 nodeName:}" failed. No retries permitted until 2026-01-27 14:48:38.644854973 +0000 UTC m=+2220.296179375 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191") : error deleting /var/lib/kubelet/pods/0d6b0751-b1bd-44ed-b6df-942f63c8b191/volume-subpaths: remove /var/lib/kubelet/pods/0d6b0751-b1bd-44ed-b6df-942f63c8b191/volume-subpaths: no such file or directory Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.147916 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory" (OuterVolumeSpecName: "inventory") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.214698 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.214726 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.214738 4833 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.214746 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z5mr\" (UniqueName: \"kubernetes.io/projected/0d6b0751-b1bd-44ed-b6df-942f63c8b191-kube-api-access-5z5mr\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.650759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" event={"ID":"0d6b0751-b1bd-44ed-b6df-942f63c8b191","Type":"ContainerDied","Data":"5205c1c925534e5dfebc631fda3b1c791a430b2f0190d1dbd530aea8a0a11229"} Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.650800 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5205c1c925534e5dfebc631fda3b1c791a430b2f0190d1dbd530aea8a0a11229" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.650827 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ng8m9" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.726592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") pod \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\" (UID: \"0d6b0751-b1bd-44ed-b6df-942f63c8b191\") " Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.740709 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d6b0751-b1bd-44ed-b6df-942f63c8b191" (UID: "0d6b0751-b1bd-44ed-b6df-942f63c8b191"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.765382 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm"] Jan 27 14:48:38 crc kubenswrapper[4833]: E0127 14:48:38.765986 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="extract-utilities" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766010 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="extract-utilities" Jan 27 14:48:38 crc kubenswrapper[4833]: E0127 14:48:38.766044 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6b0751-b1bd-44ed-b6df-942f63c8b191" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766053 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6b0751-b1bd-44ed-b6df-942f63c8b191" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:38 crc 
kubenswrapper[4833]: E0127 14:48:38.766073 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="extract-content" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766081 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="extract-content" Jan 27 14:48:38 crc kubenswrapper[4833]: E0127 14:48:38.766101 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="registry-server" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766109 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="registry-server" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766364 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6b0751-b1bd-44ed-b6df-942f63c8b191" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.766395 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bec2cb-c9ba-40dc-a4cb-5140b6ce1d0b" containerName="registry-server" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.767114 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.770071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.770647 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.808976 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm"] Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830406 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5pp\" (UniqueName: \"kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830501 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830809 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.830911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.831351 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d6b0751-b1bd-44ed-b6df-942f63c8b191-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:38 crc kubenswrapper[4833]: 
I0127 14:48:38.932521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.932840 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.932987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.933237 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p5pp\" (UniqueName: \"kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.933362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.933579 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.936791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.936807 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.938117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.938734 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.938917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:38 crc kubenswrapper[4833]: I0127 14:48:38.949460 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p5pp\" (UniqueName: \"kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:39 crc kubenswrapper[4833]: I0127 14:48:39.143196 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:48:39 crc kubenswrapper[4833]: I0127 14:48:39.676474 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm"] Jan 27 14:48:40 crc kubenswrapper[4833]: I0127 14:48:40.669082 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" event={"ID":"01ef9ac5-6b63-4441-ab5e-d700019bbe30","Type":"ContainerStarted","Data":"265074e3c014775157ab38066e59439ee92198e38505984f1a7393d5941705d6"} Jan 27 14:48:40 crc kubenswrapper[4833]: I0127 14:48:40.669380 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" event={"ID":"01ef9ac5-6b63-4441-ab5e-d700019bbe30","Type":"ContainerStarted","Data":"809c0dce9bf1c302137e4089f1dcb6c2aacf6966e4b0b5b94fd4997fdf89a5d7"} Jan 27 14:49:02 crc kubenswrapper[4833]: I0127 14:49:02.261130 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:49:02 crc kubenswrapper[4833]: I0127 14:49:02.261695 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:49:09 crc kubenswrapper[4833]: I0127 14:49:09.912854 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" podStartSLOduration=31.347207365 
podStartE2EDuration="31.912827137s" podCreationTimestamp="2026-01-27 14:48:38 +0000 UTC" firstStartedPulling="2026-01-27 14:48:39.676297229 +0000 UTC m=+2221.327621631" lastFinishedPulling="2026-01-27 14:48:40.241916981 +0000 UTC m=+2221.893241403" observedRunningTime="2026-01-27 14:48:40.685645504 +0000 UTC m=+2222.336969906" watchObservedRunningTime="2026-01-27 14:49:09.912827137 +0000 UTC m=+2251.564151539" Jan 27 14:49:09 crc kubenswrapper[4833]: I0127 14:49:09.916934 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cx57v"] Jan 27 14:49:09 crc kubenswrapper[4833]: I0127 14:49:09.920219 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:09 crc kubenswrapper[4833]: I0127 14:49:09.926746 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx57v"] Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.007850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2zrx\" (UniqueName: \"kubernetes.io/projected/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-kube-api-access-j2zrx\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.007898 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-utilities\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.008063 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-catalog-content\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.109768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-catalog-content\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.110267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2zrx\" (UniqueName: \"kubernetes.io/projected/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-kube-api-access-j2zrx\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.110397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-catalog-content\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.110674 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-utilities\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.110895 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-utilities\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.131453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2zrx\" (UniqueName: \"kubernetes.io/projected/ace8eea4-3108-4f42-ab6c-1b8a2c6b2980-kube-api-access-j2zrx\") pod \"community-operators-cx57v\" (UID: \"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980\") " pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.240281 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.841860 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx57v"] Jan 27 14:49:10 crc kubenswrapper[4833]: I0127 14:49:10.987329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx57v" event={"ID":"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980","Type":"ContainerStarted","Data":"9505d4c305a421f23829ba77fdb916c0520c6d4c00d88802927f82a4f803c043"} Jan 27 14:49:11 crc kubenswrapper[4833]: I0127 14:49:11.998979 4833 generic.go:334] "Generic (PLEG): container finished" podID="ace8eea4-3108-4f42-ab6c-1b8a2c6b2980" containerID="a0aa5dcaac2309e027925ac1dcc38565da16b3bd0abc60aa5c0a24b646299ee9" exitCode=0 Jan 27 14:49:12 crc kubenswrapper[4833]: I0127 14:49:11.999233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx57v" event={"ID":"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980","Type":"ContainerDied","Data":"a0aa5dcaac2309e027925ac1dcc38565da16b3bd0abc60aa5c0a24b646299ee9"} Jan 27 14:49:17 crc kubenswrapper[4833]: I0127 14:49:17.048821 4833 generic.go:334] "Generic (PLEG): container 
finished" podID="ace8eea4-3108-4f42-ab6c-1b8a2c6b2980" containerID="0ba1e7533d97f7e4a9a65b2949b4dd6c4652108f4d5636f21f6b4a7be57ff9a0" exitCode=0 Jan 27 14:49:17 crc kubenswrapper[4833]: I0127 14:49:17.048923 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx57v" event={"ID":"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980","Type":"ContainerDied","Data":"0ba1e7533d97f7e4a9a65b2949b4dd6c4652108f4d5636f21f6b4a7be57ff9a0"} Jan 27 14:49:18 crc kubenswrapper[4833]: I0127 14:49:18.061957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cx57v" event={"ID":"ace8eea4-3108-4f42-ab6c-1b8a2c6b2980","Type":"ContainerStarted","Data":"5ae769838abcd0e2c67fb0c73f472a80efba470b9125c8caf190eb1aab44f112"} Jan 27 14:49:18 crc kubenswrapper[4833]: I0127 14:49:18.093381 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cx57v" podStartSLOduration=3.617478211 podStartE2EDuration="9.093360252s" podCreationTimestamp="2026-01-27 14:49:09 +0000 UTC" firstStartedPulling="2026-01-27 14:49:12.004700626 +0000 UTC m=+2253.656025028" lastFinishedPulling="2026-01-27 14:49:17.480582667 +0000 UTC m=+2259.131907069" observedRunningTime="2026-01-27 14:49:18.078601763 +0000 UTC m=+2259.729926205" watchObservedRunningTime="2026-01-27 14:49:18.093360252 +0000 UTC m=+2259.744684644" Jan 27 14:49:20 crc kubenswrapper[4833]: I0127 14:49:20.241227 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:20 crc kubenswrapper[4833]: I0127 14:49:20.241591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:20 crc kubenswrapper[4833]: I0127 14:49:20.300175 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cx57v" Jan 
27 14:49:30 crc kubenswrapper[4833]: I0127 14:49:30.293515 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cx57v" Jan 27 14:49:30 crc kubenswrapper[4833]: I0127 14:49:30.357665 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cx57v"] Jan 27 14:49:30 crc kubenswrapper[4833]: I0127 14:49:30.425006 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:49:30 crc kubenswrapper[4833]: I0127 14:49:30.425296 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tghcg" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="registry-server" containerID="cri-o://9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5" gracePeriod=2 Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.077356 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.176500 4833 generic.go:334] "Generic (PLEG): container finished" podID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerID="9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5" exitCode=0 Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.176573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tghcg" event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerDied","Data":"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5"} Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.176586 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tghcg" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.176612 4833 scope.go:117] "RemoveContainer" containerID="9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.176600 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tghcg" event={"ID":"8157c8c1-501c-43cf-a42c-1e2a48f6a038","Type":"ContainerDied","Data":"4345fbf73c4344bc051cf19c9afd2b81404b06ac1bc13841bd328120efe95812"} Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.178696 4833 generic.go:334] "Generic (PLEG): container finished" podID="01ef9ac5-6b63-4441-ab5e-d700019bbe30" containerID="265074e3c014775157ab38066e59439ee92198e38505984f1a7393d5941705d6" exitCode=0 Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.178785 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" event={"ID":"01ef9ac5-6b63-4441-ab5e-d700019bbe30","Type":"ContainerDied","Data":"265074e3c014775157ab38066e59439ee92198e38505984f1a7393d5941705d6"} Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.198229 4833 scope.go:117] "RemoveContainer" containerID="c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.232081 4833 scope.go:117] "RemoveContainer" containerID="92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.245487 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content\") pod \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.245632 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lwb2h\" (UniqueName: \"kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h\") pod \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.245797 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities\") pod \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\" (UID: \"8157c8c1-501c-43cf-a42c-1e2a48f6a038\") " Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.250319 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities" (OuterVolumeSpecName: "utilities") pod "8157c8c1-501c-43cf-a42c-1e2a48f6a038" (UID: "8157c8c1-501c-43cf-a42c-1e2a48f6a038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.253336 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h" (OuterVolumeSpecName: "kube-api-access-lwb2h") pod "8157c8c1-501c-43cf-a42c-1e2a48f6a038" (UID: "8157c8c1-501c-43cf-a42c-1e2a48f6a038"). InnerVolumeSpecName "kube-api-access-lwb2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.319619 4833 scope.go:117] "RemoveContainer" containerID="9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.319996 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8157c8c1-501c-43cf-a42c-1e2a48f6a038" (UID: "8157c8c1-501c-43cf-a42c-1e2a48f6a038"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:49:31 crc kubenswrapper[4833]: E0127 14:49:31.320953 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5\": container with ID starting with 9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5 not found: ID does not exist" containerID="9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.320992 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5"} err="failed to get container status \"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5\": rpc error: code = NotFound desc = could not find container \"9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5\": container with ID starting with 9b3ac3b6ac0fc7afd028f42b7643a4768ca3493c5abbe6d98846ffb1dd1a85f5 not found: ID does not exist" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.321011 4833 scope.go:117] "RemoveContainer" containerID="c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6" Jan 27 14:49:31 crc kubenswrapper[4833]: E0127 14:49:31.324518 4833 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6\": container with ID starting with c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6 not found: ID does not exist" containerID="c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.324545 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6"} err="failed to get container status \"c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6\": rpc error: code = NotFound desc = could not find container \"c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6\": container with ID starting with c62ba402659183a686c9024e53b0993077cd831afccc61be671ba5202fb4acd6 not found: ID does not exist" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.324559 4833 scope.go:117] "RemoveContainer" containerID="92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053" Jan 27 14:49:31 crc kubenswrapper[4833]: E0127 14:49:31.325394 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053\": container with ID starting with 92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053 not found: ID does not exist" containerID="92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.325412 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053"} err="failed to get container status \"92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053\": rpc error: code = NotFound desc = could 
not find container \"92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053\": container with ID starting with 92de96c72dd00037f533722d050f47775697a4e34f4a2297339dcddc254fe053 not found: ID does not exist" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.348039 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.348071 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwb2h\" (UniqueName: \"kubernetes.io/projected/8157c8c1-501c-43cf-a42c-1e2a48f6a038-kube-api-access-lwb2h\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.348082 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8157c8c1-501c-43cf-a42c-1e2a48f6a038-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.514530 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:49:31 crc kubenswrapper[4833]: I0127 14:49:31.522207 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tghcg"] Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.260587 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.260684 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.260751 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.261813 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.261917 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" gracePeriod=600 Jan 27 14:49:32 crc kubenswrapper[4833]: E0127 14:49:32.504994 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.727707 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.879835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.879943 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p5pp\" (UniqueName: \"kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.879970 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.879999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.880128 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc 
kubenswrapper[4833]: I0127 14:49:32.880276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0\") pod \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\" (UID: \"01ef9ac5-6b63-4441-ab5e-d700019bbe30\") " Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.885623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp" (OuterVolumeSpecName: "kube-api-access-9p5pp") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "kube-api-access-9p5pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.897359 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.921703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.933728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.951608 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory" (OuterVolumeSpecName: "inventory") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.981603 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "01ef9ac5-6b63-4441-ab5e-d700019bbe30" (UID: "01ef9ac5-6b63-4441-ab5e-d700019bbe30"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982417 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982528 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982551 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p5pp\" (UniqueName: \"kubernetes.io/projected/01ef9ac5-6b63-4441-ab5e-d700019bbe30-kube-api-access-9p5pp\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982564 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982576 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:32 crc kubenswrapper[4833]: I0127 14:49:32.982588 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ef9ac5-6b63-4441-ab5e-d700019bbe30-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.204490 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.204489 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm" event={"ID":"01ef9ac5-6b63-4441-ab5e-d700019bbe30","Type":"ContainerDied","Data":"809c0dce9bf1c302137e4089f1dcb6c2aacf6966e4b0b5b94fd4997fdf89a5d7"} Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.204626 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="809c0dce9bf1c302137e4089f1dcb6c2aacf6966e4b0b5b94fd4997fdf89a5d7" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.207359 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" exitCode=0 Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.207423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"} Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.207499 4833 scope.go:117] "RemoveContainer" containerID="b93fa008204f075160dd00a1ffe7bef2d616079e42f06d1e6e87a728b29b2ba0" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.207821 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" Jan 27 14:49:33 crc kubenswrapper[4833]: E0127 14:49:33.208131 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.220827 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" path="/var/lib/kubelet/pods/8157c8c1-501c-43cf-a42c-1e2a48f6a038/volumes" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.313835 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"] Jan 27 14:49:33 crc kubenswrapper[4833]: E0127 14:49:33.314330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="extract-utilities" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.314350 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="extract-utilities" Jan 27 14:49:33 crc kubenswrapper[4833]: E0127 14:49:33.314372 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="extract-content" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.314381 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="extract-content" Jan 27 14:49:33 crc kubenswrapper[4833]: E0127 14:49:33.314410 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="registry-server" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.314420 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="registry-server" Jan 27 14:49:33 crc kubenswrapper[4833]: E0127 14:49:33.314436 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ef9ac5-6b63-4441-ab5e-d700019bbe30" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:33 
crc kubenswrapper[4833]: I0127 14:49:33.314461 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ef9ac5-6b63-4441-ab5e-d700019bbe30" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.314670 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ef9ac5-6b63-4441-ab5e-d700019bbe30" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.314680 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8157c8c1-501c-43cf-a42c-1e2a48f6a038" containerName="registry-server" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.315375 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.320698 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.321062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.321132 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.321158 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.321938 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.328485 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"] Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.492808 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.493198 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.493301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.493429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.493547 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkrt\" (UniqueName: 
\"kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.595173 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.595576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkkrt\" (UniqueName: \"kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.595704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.595801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.595836 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.600182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.600726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.601035 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.601129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.618013 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkkrt\" (UniqueName: \"kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mg569\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:33 crc kubenswrapper[4833]: I0127 14:49:33.633488 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:49:34 crc kubenswrapper[4833]: I0127 14:49:34.163270 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"]
Jan 27 14:49:34 crc kubenswrapper[4833]: I0127 14:49:34.220022 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" event={"ID":"f18f58a2-a7ce-4714-838e-47e089f59cff","Type":"ContainerStarted","Data":"9e3cb0a88233cc5cf73baaf8bb1a43d89940b3964e44c03a19164159f2059ec2"}
Jan 27 14:49:35 crc kubenswrapper[4833]: I0127 14:49:35.229168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" event={"ID":"f18f58a2-a7ce-4714-838e-47e089f59cff","Type":"ContainerStarted","Data":"4a04067b2730eff69b5598e26e99d66504baf95926b0be9bfc74743d60f0d275"}
Jan 27 14:49:35 crc kubenswrapper[4833]: I0127 14:49:35.253065 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" podStartSLOduration=1.756690339 podStartE2EDuration="2.253047008s" podCreationTimestamp="2026-01-27 14:49:33 +0000 UTC" firstStartedPulling="2026-01-27 14:49:34.16349363 +0000 UTC m=+2275.814818032" lastFinishedPulling="2026-01-27 14:49:34.659850299 +0000 UTC m=+2276.311174701" observedRunningTime="2026-01-27 14:49:35.244889838 +0000 UTC m=+2276.896214250" watchObservedRunningTime="2026-01-27 14:49:35.253047008 +0000 UTC m=+2276.904371410"
Jan 27 14:49:46 crc kubenswrapper[4833]: I0127 14:49:46.210741 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:49:46 crc kubenswrapper[4833]: E0127 14:49:46.211410 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:50:01 crc kubenswrapper[4833]: I0127 14:50:01.212367 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:50:01 crc kubenswrapper[4833]: E0127 14:50:01.218878 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:50:16 crc kubenswrapper[4833]: I0127 14:50:16.211565 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:50:16 crc kubenswrapper[4833]: E0127 14:50:16.212275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:50:28 crc kubenswrapper[4833]: I0127 14:50:28.211167 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:50:28 crc kubenswrapper[4833]: E0127 14:50:28.213169 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:50:42 crc kubenswrapper[4833]: I0127 14:50:42.211149 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:50:42 crc kubenswrapper[4833]: E0127 14:50:42.211960 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:50:57 crc kubenswrapper[4833]: I0127 14:50:57.211747 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:50:57 crc kubenswrapper[4833]: E0127 14:50:57.213041 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:51:08 crc kubenswrapper[4833]: I0127 14:51:08.211131 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:51:08 crc kubenswrapper[4833]: E0127 14:51:08.212438 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:51:20 crc kubenswrapper[4833]: I0127 14:51:20.210547 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:51:20 crc kubenswrapper[4833]: E0127 14:51:20.211578 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.401682 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.405087 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.436335 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.573667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.573769 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.573845 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgpk2\" (UniqueName: \"kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.676009 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgpk2\" (UniqueName: \"kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.676223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.676262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.676770 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.676814 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.697186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgpk2\" (UniqueName: \"kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2\") pod \"redhat-marketplace-vkbk6\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") " pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:32 crc kubenswrapper[4833]: I0127 14:51:32.731478 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:33 crc kubenswrapper[4833]: I0127 14:51:33.199954 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:33 crc kubenswrapper[4833]: I0127 14:51:33.674569 4833 generic.go:334] "Generic (PLEG): container finished" podID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerID="bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767" exitCode=0
Jan 27 14:51:33 crc kubenswrapper[4833]: I0127 14:51:33.674629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerDied","Data":"bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767"}
Jan 27 14:51:33 crc kubenswrapper[4833]: I0127 14:51:33.674658 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerStarted","Data":"30358d97481b5790b99aad3dc144044fcbe7b73ac8be893557a3630abfe3b0e3"}
Jan 27 14:51:33 crc kubenswrapper[4833]: I0127 14:51:33.678481 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 14:51:34 crc kubenswrapper[4833]: I0127 14:51:34.685226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerStarted","Data":"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"}
Jan 27 14:51:35 crc kubenswrapper[4833]: I0127 14:51:35.211109 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:51:35 crc kubenswrapper[4833]: E0127 14:51:35.211626 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:51:35 crc kubenswrapper[4833]: I0127 14:51:35.696901 4833 generic.go:334] "Generic (PLEG): container finished" podID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerID="5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669" exitCode=0
Jan 27 14:51:35 crc kubenswrapper[4833]: I0127 14:51:35.696954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerDied","Data":"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"}
Jan 27 14:51:36 crc kubenswrapper[4833]: I0127 14:51:36.706874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerStarted","Data":"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"}
Jan 27 14:51:36 crc kubenswrapper[4833]: I0127 14:51:36.730163 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkbk6" podStartSLOduration=2.318893048 podStartE2EDuration="4.730143025s" podCreationTimestamp="2026-01-27 14:51:32 +0000 UTC" firstStartedPulling="2026-01-27 14:51:33.678221526 +0000 UTC m=+2395.329545928" lastFinishedPulling="2026-01-27 14:51:36.089471513 +0000 UTC m=+2397.740795905" observedRunningTime="2026-01-27 14:51:36.726576727 +0000 UTC m=+2398.377901139" watchObservedRunningTime="2026-01-27 14:51:36.730143025 +0000 UTC m=+2398.381467427"
Jan 27 14:51:42 crc kubenswrapper[4833]: I0127 14:51:42.731854 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:42 crc kubenswrapper[4833]: I0127 14:51:42.732408 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:42 crc kubenswrapper[4833]: I0127 14:51:42.781666 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:42 crc kubenswrapper[4833]: I0127 14:51:42.831396 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:43 crc kubenswrapper[4833]: I0127 14:51:43.014199 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:44 crc kubenswrapper[4833]: I0127 14:51:44.775792 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkbk6" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="registry-server" containerID="cri-o://a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3" gracePeriod=2
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.283040 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.329908 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content\") pod \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") "
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.434755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgpk2\" (UniqueName: \"kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2\") pod \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") "
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.434808 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities\") pod \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\" (UID: \"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9\") "
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.435593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities" (OuterVolumeSpecName: "utilities") pod "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" (UID: "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.436052 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.440553 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2" (OuterVolumeSpecName: "kube-api-access-lgpk2") pod "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" (UID: "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9"). InnerVolumeSpecName "kube-api-access-lgpk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.538062 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgpk2\" (UniqueName: \"kubernetes.io/projected/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-kube-api-access-lgpk2\") on node \"crc\" DevicePath \"\""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.786780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" (UID: "6da6a0a4-7176-4389-b8f3-1535a7d1d8c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.788986 4833 generic.go:334] "Generic (PLEG): container finished" podID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerID="a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3" exitCode=0
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.789040 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerDied","Data":"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"}
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.789070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkbk6" event={"ID":"6da6a0a4-7176-4389-b8f3-1535a7d1d8c9","Type":"ContainerDied","Data":"30358d97481b5790b99aad3dc144044fcbe7b73ac8be893557a3630abfe3b0e3"}
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.789089 4833 scope.go:117] "RemoveContainer" containerID="a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.789269 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkbk6"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.847059 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.851760 4833 scope.go:117] "RemoveContainer" containerID="5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.864529 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.898398 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkbk6"]
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.913815 4833 scope.go:117] "RemoveContainer" containerID="bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.948929 4833 scope.go:117] "RemoveContainer" containerID="a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"
Jan 27 14:51:45 crc kubenswrapper[4833]: E0127 14:51:45.949879 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3\": container with ID starting with a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3 not found: ID does not exist" containerID="a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.949919 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3"} err="failed to get container status \"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3\": rpc error: code = NotFound desc = could not find container \"a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3\": container with ID starting with a89c0440c9dd62447a6b0ff50f9c0701769cd4ce3977dddd866a2898a6e08cb3 not found: ID does not exist"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.949943 4833 scope.go:117] "RemoveContainer" containerID="5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"
Jan 27 14:51:45 crc kubenswrapper[4833]: E0127 14:51:45.951657 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669\": container with ID starting with 5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669 not found: ID does not exist" containerID="5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.951700 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669"} err="failed to get container status \"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669\": rpc error: code = NotFound desc = could not find container \"5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669\": container with ID starting with 5754825105f1dfe75108a694d4fba53d7193aead1ac09f67b10313c5147c4669 not found: ID does not exist"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.951733 4833 scope.go:117] "RemoveContainer" containerID="bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767"
Jan 27 14:51:45 crc kubenswrapper[4833]: E0127 14:51:45.952115 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767\": container with ID starting with bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767 not found: ID does not exist" containerID="bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767"
Jan 27 14:51:45 crc kubenswrapper[4833]: I0127 14:51:45.952152 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767"} err="failed to get container status \"bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767\": rpc error: code = NotFound desc = could not find container \"bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767\": container with ID starting with bd2a707552697669396ca756d02be2b349daa7a412b30279734619561a7f4767 not found: ID does not exist"
Jan 27 14:51:47 crc kubenswrapper[4833]: I0127 14:51:47.220783 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" path="/var/lib/kubelet/pods/6da6a0a4-7176-4389-b8f3-1535a7d1d8c9/volumes"
Jan 27 14:51:48 crc kubenswrapper[4833]: I0127 14:51:48.211534 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:51:48 crc kubenswrapper[4833]: E0127 14:51:48.211991 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:52:01 crc kubenswrapper[4833]: I0127 14:52:01.211186 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:52:01 crc kubenswrapper[4833]: E0127 14:52:01.211954 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:52:16 crc kubenswrapper[4833]: I0127 14:52:16.211316 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:52:16 crc kubenswrapper[4833]: E0127 14:52:16.212275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:52:27 crc kubenswrapper[4833]: I0127 14:52:27.210294 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:52:27 crc kubenswrapper[4833]: E0127 14:52:27.211017 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:52:40 crc kubenswrapper[4833]: I0127 14:52:40.211265 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:52:40 crc kubenswrapper[4833]: E0127 14:52:40.211904 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:52:51 crc kubenswrapper[4833]: I0127 14:52:51.211994 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:52:51 crc kubenswrapper[4833]: E0127 14:52:51.213316 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:53:06 crc kubenswrapper[4833]: I0127 14:53:06.211854 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:53:06 crc kubenswrapper[4833]: E0127 14:53:06.213205 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:53:17 crc kubenswrapper[4833]: I0127 14:53:17.214497 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:53:17 crc kubenswrapper[4833]: E0127 14:53:17.215255 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:53:31 crc kubenswrapper[4833]: I0127 14:53:31.210223 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:53:31 crc kubenswrapper[4833]: E0127 14:53:31.211136 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:53:46 crc kubenswrapper[4833]: I0127 14:53:46.212558 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:53:46 crc kubenswrapper[4833]: E0127 14:53:46.213292 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:53:57 crc kubenswrapper[4833]: I0127 14:53:57.210907 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1"
Jan 27 14:53:57 crc kubenswrapper[4833]: E0127 14:53:57.211610 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"
Jan 27 14:54:01 crc kubenswrapper[4833]: I0127 14:54:01.264886 4833 generic.go:334] "Generic (PLEG): container finished" podID="f18f58a2-a7ce-4714-838e-47e089f59cff" containerID="4a04067b2730eff69b5598e26e99d66504baf95926b0be9bfc74743d60f0d275" exitCode=0
Jan 27 14:54:01 crc kubenswrapper[4833]: I0127 14:54:01.264973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" event={"ID":"f18f58a2-a7ce-4714-838e-47e089f59cff","Type":"ContainerDied","Data":"4a04067b2730eff69b5598e26e99d66504baf95926b0be9bfc74743d60f0d275"}
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.664737 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569"
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.784610 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory\") pod \"f18f58a2-a7ce-4714-838e-47e089f59cff\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") "
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.784781 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam\") pod \"f18f58a2-a7ce-4714-838e-47e089f59cff\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") "
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.785357 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle\") pod \"f18f58a2-a7ce-4714-838e-47e089f59cff\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") "
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.785508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0\") pod \"f18f58a2-a7ce-4714-838e-47e089f59cff\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") "
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.785534 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkkrt\" (UniqueName: \"kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt\") pod \"f18f58a2-a7ce-4714-838e-47e089f59cff\" (UID: \"f18f58a2-a7ce-4714-838e-47e089f59cff\") "
Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.789866 4833
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt" (OuterVolumeSpecName: "kube-api-access-kkkrt") pod "f18f58a2-a7ce-4714-838e-47e089f59cff" (UID: "f18f58a2-a7ce-4714-838e-47e089f59cff"). InnerVolumeSpecName "kube-api-access-kkkrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.790379 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f18f58a2-a7ce-4714-838e-47e089f59cff" (UID: "f18f58a2-a7ce-4714-838e-47e089f59cff"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.815740 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f18f58a2-a7ce-4714-838e-47e089f59cff" (UID: "f18f58a2-a7ce-4714-838e-47e089f59cff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.825559 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f18f58a2-a7ce-4714-838e-47e089f59cff" (UID: "f18f58a2-a7ce-4714-838e-47e089f59cff"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.826621 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory" (OuterVolumeSpecName: "inventory") pod "f18f58a2-a7ce-4714-838e-47e089f59cff" (UID: "f18f58a2-a7ce-4714-838e-47e089f59cff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.888511 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.888565 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.888585 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.888602 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f18f58a2-a7ce-4714-838e-47e089f59cff-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:02 crc kubenswrapper[4833]: I0127 14:54:02.888616 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkkrt\" (UniqueName: \"kubernetes.io/projected/f18f58a2-a7ce-4714-838e-47e089f59cff-kube-api-access-kkkrt\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.285657 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" event={"ID":"f18f58a2-a7ce-4714-838e-47e089f59cff","Type":"ContainerDied","Data":"9e3cb0a88233cc5cf73baaf8bb1a43d89940b3964e44c03a19164159f2059ec2"} Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.285907 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mg569" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.285917 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3cb0a88233cc5cf73baaf8bb1a43d89940b3964e44c03a19164159f2059ec2" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.393589 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r"] Jan 27 14:54:03 crc kubenswrapper[4833]: E0127 14:54:03.393959 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18f58a2-a7ce-4714-838e-47e089f59cff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.393973 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18f58a2-a7ce-4714-838e-47e089f59cff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:54:03 crc kubenswrapper[4833]: E0127 14:54:03.393984 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="extract-utilities" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.393991 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="extract-utilities" Jan 27 14:54:03 crc kubenswrapper[4833]: E0127 14:54:03.394013 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="registry-server" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.394020 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="registry-server" Jan 27 14:54:03 crc kubenswrapper[4833]: E0127 14:54:03.394037 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="extract-content" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.394043 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="extract-content" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.394233 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da6a0a4-7176-4389-b8f3-1535a7d1d8c9" containerName="registry-server" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.394252 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18f58a2-a7ce-4714-838e-47e089f59cff" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.394909 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399051 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399080 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399224 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399295 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399407 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399542 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.399631 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.403690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r"] Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: 
I0127 14:54:03.500078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500104 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjbr\" 
(UniqueName: \"kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500291 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.500344 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602544 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602646 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602699 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602729 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.602773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjbr\" (UniqueName: \"kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.603630 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.607112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.607117 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.608435 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.608453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.609029 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.609166 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.609987 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.634327 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjbr\" (UniqueName: \"kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-n889r\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:03 crc kubenswrapper[4833]: I0127 14:54:03.715559 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:54:04 crc kubenswrapper[4833]: I0127 14:54:04.233109 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r"] Jan 27 14:54:04 crc kubenswrapper[4833]: W0127 14:54:04.241426 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf68646f8_f595_44de_898d_94a98ffb6408.slice/crio-e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094 WatchSource:0}: Error finding container e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094: Status 404 returned error can't find the container with id e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094 Jan 27 14:54:04 crc kubenswrapper[4833]: I0127 14:54:04.298551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" event={"ID":"f68646f8-f595-44de-898d-94a98ffb6408","Type":"ContainerStarted","Data":"e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094"} Jan 27 14:54:05 crc kubenswrapper[4833]: I0127 14:54:05.312294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" event={"ID":"f68646f8-f595-44de-898d-94a98ffb6408","Type":"ContainerStarted","Data":"f03d4dc6a550f117bee1604cbafe0645b07d8fb94732823eb97a5a8394e608a4"} Jan 27 14:54:05 crc kubenswrapper[4833]: I0127 14:54:05.351340 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" podStartSLOduration=1.725376884 podStartE2EDuration="2.351312107s" podCreationTimestamp="2026-01-27 14:54:03 +0000 UTC" firstStartedPulling="2026-01-27 14:54:04.243473791 +0000 UTC m=+2545.894798193" lastFinishedPulling="2026-01-27 14:54:04.869409024 +0000 UTC m=+2546.520733416" observedRunningTime="2026-01-27 
14:54:05.329697794 +0000 UTC m=+2546.981022216" watchObservedRunningTime="2026-01-27 14:54:05.351312107 +0000 UTC m=+2547.002636529" Jan 27 14:54:11 crc kubenswrapper[4833]: I0127 14:54:11.211597 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" Jan 27 14:54:11 crc kubenswrapper[4833]: E0127 14:54:11.212895 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:54:22 crc kubenswrapper[4833]: I0127 14:54:22.210942 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" Jan 27 14:54:22 crc kubenswrapper[4833]: E0127 14:54:22.211907 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 14:54:36 crc kubenswrapper[4833]: I0127 14:54:36.211778 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" Jan 27 14:54:36 crc kubenswrapper[4833]: I0127 14:54:36.623669 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139"} Jan 
27 14:56:37 crc kubenswrapper[4833]: I0127 14:56:37.771842 4833 generic.go:334] "Generic (PLEG): container finished" podID="f68646f8-f595-44de-898d-94a98ffb6408" containerID="f03d4dc6a550f117bee1604cbafe0645b07d8fb94732823eb97a5a8394e608a4" exitCode=0 Jan 27 14:56:37 crc kubenswrapper[4833]: I0127 14:56:37.771917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" event={"ID":"f68646f8-f595-44de-898d-94a98ffb6408","Type":"ContainerDied","Data":"f03d4dc6a550f117bee1604cbafe0645b07d8fb94732823eb97a5a8394e608a4"} Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.243217 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.367786 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqjbr\" (UniqueName: \"kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.367913 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.367935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368022 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368187 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.368250 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1\") pod \"f68646f8-f595-44de-898d-94a98ffb6408\" (UID: \"f68646f8-f595-44de-898d-94a98ffb6408\") " Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.375137 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr" (OuterVolumeSpecName: "kube-api-access-wqjbr") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "kube-api-access-wqjbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.389744 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.398867 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.399841 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.404738 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.407072 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.410406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.418086 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.418399 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory" (OuterVolumeSpecName: "inventory") pod "f68646f8-f595-44de-898d-94a98ffb6408" (UID: "f68646f8-f595-44de-898d-94a98ffb6408"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470681 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470711 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470722 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470732 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470743 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470752 4833 reconciler_common.go:293] "Volume detached for 
volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470760 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqjbr\" (UniqueName: \"kubernetes.io/projected/f68646f8-f595-44de-898d-94a98ffb6408-kube-api-access-wqjbr\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470769 4833 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f68646f8-f595-44de-898d-94a98ffb6408-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.470777 4833 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f68646f8-f595-44de-898d-94a98ffb6408-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.825213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" event={"ID":"f68646f8-f595-44de-898d-94a98ffb6408","Type":"ContainerDied","Data":"e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094"} Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.825292 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22c137b6ca75f47deb1130791dd3e3888f824d1c8bad45c39cdbedd600ab094" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.825424 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-n889r" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.908719 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n"] Jan 27 14:56:39 crc kubenswrapper[4833]: E0127 14:56:39.909170 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f68646f8-f595-44de-898d-94a98ffb6408" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.909188 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f68646f8-f595-44de-898d-94a98ffb6408" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.909392 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f68646f8-f595-44de-898d-94a98ffb6408" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.910168 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.912641 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.912747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l29rn" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.912829 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.913053 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.914011 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.927715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n"] Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.980739 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.980879 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.980916 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.980989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.981076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.981149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:39 crc kubenswrapper[4833]: I0127 14:56:39.981266 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ffm\" (UniqueName: \"kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082536 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082572 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082619 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ffm\" (UniqueName: \"kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.082810 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.087520 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.088136 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.088378 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.089051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.089278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.091619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.099054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6ffm\" (UniqueName: \"kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.232611 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.638563 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.641708 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.657308 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.797978 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.798064 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.798245 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.827001 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n"] Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.835014 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.902891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.903334 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.903379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.903828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.904635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.934226 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vq88\" (UniqueName: 
\"kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88\") pod \"redhat-operators-g4d4l\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:40 crc kubenswrapper[4833]: I0127 14:56:40.966765 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.435391 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:56:41 crc kubenswrapper[4833]: W0127 14:56:41.438581 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36cc74a_4cc0_4997_b9c4_074b6ea81941.slice/crio-3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417 WatchSource:0}: Error finding container 3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417: Status 404 returned error can't find the container with id 3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417 Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.858995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" event={"ID":"de21b54e-baa7-4329-afa1-44caba34567e","Type":"ContainerStarted","Data":"4f4e390dd248144ef91927b24e1b6868152a25c50ad1ac2d11d9c92c658f0ab3"} Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.859401 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" event={"ID":"de21b54e-baa7-4329-afa1-44caba34567e","Type":"ContainerStarted","Data":"264766a3178cd2e468f76f5056ed3ae5f159a5e3f3f3c2af838902fbc5e6c7a0"} Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.860977 4833 generic.go:334] "Generic (PLEG): container finished" podID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" 
containerID="7ec679a39de0550780b606f60e769fe8e30771f2dd57276dae080fe569d738f9" exitCode=0 Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.861017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerDied","Data":"7ec679a39de0550780b606f60e769fe8e30771f2dd57276dae080fe569d738f9"} Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.861037 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerStarted","Data":"3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417"} Jan 27 14:56:41 crc kubenswrapper[4833]: I0127 14:56:41.886115 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" podStartSLOduration=2.213662378 podStartE2EDuration="2.886092935s" podCreationTimestamp="2026-01-27 14:56:39 +0000 UTC" firstStartedPulling="2026-01-27 14:56:40.834752562 +0000 UTC m=+2702.486076964" lastFinishedPulling="2026-01-27 14:56:41.507183119 +0000 UTC m=+2703.158507521" observedRunningTime="2026-01-27 14:56:41.876354354 +0000 UTC m=+2703.527678756" watchObservedRunningTime="2026-01-27 14:56:41.886092935 +0000 UTC m=+2703.537417337" Jan 27 14:56:42 crc kubenswrapper[4833]: I0127 14:56:42.873398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerStarted","Data":"18373ab7d29f736d6ffda7e4f9640af911a1f5ca79fd048f392f65e9fc37f911"} Jan 27 14:56:49 crc kubenswrapper[4833]: I0127 14:56:49.956057 4833 generic.go:334] "Generic (PLEG): container finished" podID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerID="18373ab7d29f736d6ffda7e4f9640af911a1f5ca79fd048f392f65e9fc37f911" exitCode=0 Jan 27 14:56:49 crc kubenswrapper[4833]: 
I0127 14:56:49.956106 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerDied","Data":"18373ab7d29f736d6ffda7e4f9640af911a1f5ca79fd048f392f65e9fc37f911"} Jan 27 14:56:51 crc kubenswrapper[4833]: I0127 14:56:51.986763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerStarted","Data":"84673c17a8ce3eb7d8400c556cb2909b7ab2716ac42b5e6e2a17fc188437f7dd"} Jan 27 14:56:52 crc kubenswrapper[4833]: I0127 14:56:52.015886 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4d4l" podStartSLOduration=3.13827922 podStartE2EDuration="12.015862945s" podCreationTimestamp="2026-01-27 14:56:40 +0000 UTC" firstStartedPulling="2026-01-27 14:56:41.864329539 +0000 UTC m=+2703.515653941" lastFinishedPulling="2026-01-27 14:56:50.741913224 +0000 UTC m=+2712.393237666" observedRunningTime="2026-01-27 14:56:52.007953638 +0000 UTC m=+2713.659278050" watchObservedRunningTime="2026-01-27 14:56:52.015862945 +0000 UTC m=+2713.667187347" Jan 27 14:57:00 crc kubenswrapper[4833]: I0127 14:57:00.967774 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:00 crc kubenswrapper[4833]: I0127 14:57:00.968203 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:02 crc kubenswrapper[4833]: I0127 14:57:02.022798 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4d4l" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="registry-server" probeResult="failure" output=< Jan 27 14:57:02 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 14:57:02 crc 
kubenswrapper[4833]: > Jan 27 14:57:02 crc kubenswrapper[4833]: I0127 14:57:02.260877 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:57:02 crc kubenswrapper[4833]: I0127 14:57:02.260957 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:57:11 crc kubenswrapper[4833]: I0127 14:57:11.023368 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:11 crc kubenswrapper[4833]: I0127 14:57:11.132022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:11 crc kubenswrapper[4833]: I0127 14:57:11.848781 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:57:12 crc kubenswrapper[4833]: I0127 14:57:12.191517 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g4d4l" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="registry-server" containerID="cri-o://84673c17a8ce3eb7d8400c556cb2909b7ab2716ac42b5e6e2a17fc188437f7dd" gracePeriod=2 Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.204407 4833 generic.go:334] "Generic (PLEG): container finished" podID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerID="84673c17a8ce3eb7d8400c556cb2909b7ab2716ac42b5e6e2a17fc188437f7dd" exitCode=0 Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 
14:57:13.204999 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerDied","Data":"84673c17a8ce3eb7d8400c556cb2909b7ab2716ac42b5e6e2a17fc188437f7dd"} Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.205042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4d4l" event={"ID":"e36cc74a-4cc0-4997-b9c4-074b6ea81941","Type":"ContainerDied","Data":"3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417"} Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.205063 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0ac71745053bfc320abde0a30ce4aed62e9f7b3b3a8aa1b0bd8cfe76927417" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.292734 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.381162 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88\") pod \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.381268 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities\") pod \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.381501 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content\") pod 
\"e36cc74a-4cc0-4997-b9c4-074b6ea81941\" (UID: \"e36cc74a-4cc0-4997-b9c4-074b6ea81941\") " Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.382804 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities" (OuterVolumeSpecName: "utilities") pod "e36cc74a-4cc0-4997-b9c4-074b6ea81941" (UID: "e36cc74a-4cc0-4997-b9c4-074b6ea81941"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.393686 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88" (OuterVolumeSpecName: "kube-api-access-2vq88") pod "e36cc74a-4cc0-4997-b9c4-074b6ea81941" (UID: "e36cc74a-4cc0-4997-b9c4-074b6ea81941"). InnerVolumeSpecName "kube-api-access-2vq88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.483956 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vq88\" (UniqueName: \"kubernetes.io/projected/e36cc74a-4cc0-4997-b9c4-074b6ea81941-kube-api-access-2vq88\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.484001 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.512663 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e36cc74a-4cc0-4997-b9c4-074b6ea81941" (UID: "e36cc74a-4cc0-4997-b9c4-074b6ea81941"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:57:13 crc kubenswrapper[4833]: I0127 14:57:13.586322 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36cc74a-4cc0-4997-b9c4-074b6ea81941-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:14 crc kubenswrapper[4833]: I0127 14:57:14.218174 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4d4l" Jan 27 14:57:14 crc kubenswrapper[4833]: I0127 14:57:14.276754 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:57:14 crc kubenswrapper[4833]: I0127 14:57:14.291271 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g4d4l"] Jan 27 14:57:15 crc kubenswrapper[4833]: I0127 14:57:15.224678 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" path="/var/lib/kubelet/pods/e36cc74a-4cc0-4997-b9c4-074b6ea81941/volumes" Jan 27 14:57:32 crc kubenswrapper[4833]: I0127 14:57:32.261667 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:57:32 crc kubenswrapper[4833]: I0127 14:57:32.262422 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.960323 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:00 crc kubenswrapper[4833]: E0127 14:58:00.961365 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="extract-utilities" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.961381 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="extract-utilities" Jan 27 14:58:00 crc kubenswrapper[4833]: E0127 14:58:00.961396 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="registry-server" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.961404 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="registry-server" Jan 27 14:58:00 crc kubenswrapper[4833]: E0127 14:58:00.961426 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="extract-content" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.961433 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="extract-content" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.961692 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36cc74a-4cc0-4997-b9c4-074b6ea81941" containerName="registry-server" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.963393 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:00 crc kubenswrapper[4833]: I0127 14:58:00.989760 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.061088 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gn2\" (UniqueName: \"kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.061427 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.061680 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.163528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.163607 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9gn2\" (UniqueName: \"kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.163657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.164066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.164128 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.186170 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gn2\" (UniqueName: \"kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2\") pod \"certified-operators-w2klr\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.299213 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:01 crc kubenswrapper[4833]: I0127 14:58:01.873378 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.260541 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.260610 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.260658 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.261544 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.261612 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" 
containerID="cri-o://278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139" gracePeriod=600 Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.719509 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139" exitCode=0 Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.719876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139"} Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.719903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862"} Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.719918 4833 scope.go:117] "RemoveContainer" containerID="b6adba2ee41372e06b6fcf8636c145477f36f1d8d9b1be6d166bbf252df45fa1" Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.723228 4833 generic.go:334] "Generic (PLEG): container finished" podID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerID="ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087" exitCode=0 Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.723264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerDied","Data":"ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087"} Jan 27 14:58:02 crc kubenswrapper[4833]: I0127 14:58:02.723284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" 
event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerStarted","Data":"67e7a49590dd085a66b78f4d1147d7a1f5b28ed96599b84829f37568deb9f2f2"} Jan 27 14:58:03 crc kubenswrapper[4833]: I0127 14:58:03.736356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerStarted","Data":"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf"} Jan 27 14:58:04 crc kubenswrapper[4833]: I0127 14:58:04.747096 4833 generic.go:334] "Generic (PLEG): container finished" podID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerID="dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf" exitCode=0 Jan 27 14:58:04 crc kubenswrapper[4833]: I0127 14:58:04.747198 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerDied","Data":"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf"} Jan 27 14:58:05 crc kubenswrapper[4833]: I0127 14:58:05.760291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerStarted","Data":"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a"} Jan 27 14:58:05 crc kubenswrapper[4833]: I0127 14:58:05.786730 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w2klr" podStartSLOduration=3.2713026210000002 podStartE2EDuration="5.786712122s" podCreationTimestamp="2026-01-27 14:58:00 +0000 UTC" firstStartedPulling="2026-01-27 14:58:02.724622871 +0000 UTC m=+2784.375947283" lastFinishedPulling="2026-01-27 14:58:05.240032382 +0000 UTC m=+2786.891356784" observedRunningTime="2026-01-27 14:58:05.783001165 +0000 UTC m=+2787.434325587" watchObservedRunningTime="2026-01-27 14:58:05.786712122 +0000 UTC 
m=+2787.438036534" Jan 27 14:58:11 crc kubenswrapper[4833]: I0127 14:58:11.300043 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:11 crc kubenswrapper[4833]: I0127 14:58:11.300682 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:11 crc kubenswrapper[4833]: I0127 14:58:11.363367 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:11 crc kubenswrapper[4833]: I0127 14:58:11.891965 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:11 crc kubenswrapper[4833]: I0127 14:58:11.943754 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:13 crc kubenswrapper[4833]: I0127 14:58:13.849023 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w2klr" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="registry-server" containerID="cri-o://b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a" gracePeriod=2 Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.318781 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.340359 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities\") pod \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.340588 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gn2\" (UniqueName: \"kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2\") pod \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.340767 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content\") pod \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\" (UID: \"b6fd2dd1-5c2a-421d-92ee-b12904f72f61\") " Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.342320 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities" (OuterVolumeSpecName: "utilities") pod "b6fd2dd1-5c2a-421d-92ee-b12904f72f61" (UID: "b6fd2dd1-5c2a-421d-92ee-b12904f72f61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.342971 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.348752 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2" (OuterVolumeSpecName: "kube-api-access-n9gn2") pod "b6fd2dd1-5c2a-421d-92ee-b12904f72f61" (UID: "b6fd2dd1-5c2a-421d-92ee-b12904f72f61"). InnerVolumeSpecName "kube-api-access-n9gn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.394020 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6fd2dd1-5c2a-421d-92ee-b12904f72f61" (UID: "b6fd2dd1-5c2a-421d-92ee-b12904f72f61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.445374 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.445410 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gn2\" (UniqueName: \"kubernetes.io/projected/b6fd2dd1-5c2a-421d-92ee-b12904f72f61-kube-api-access-n9gn2\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.860544 4833 generic.go:334] "Generic (PLEG): container finished" podID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerID="b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a" exitCode=0 Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.860632 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerDied","Data":"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a"} Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.861273 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2klr" event={"ID":"b6fd2dd1-5c2a-421d-92ee-b12904f72f61","Type":"ContainerDied","Data":"67e7a49590dd085a66b78f4d1147d7a1f5b28ed96599b84829f37568deb9f2f2"} Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.861349 4833 scope.go:117] "RemoveContainer" containerID="b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.860695 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2klr" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.902350 4833 scope.go:117] "RemoveContainer" containerID="dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.902827 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.921255 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w2klr"] Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.927531 4833 scope.go:117] "RemoveContainer" containerID="ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.964484 4833 scope.go:117] "RemoveContainer" containerID="b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a" Jan 27 14:58:14 crc kubenswrapper[4833]: E0127 14:58:14.965040 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a\": container with ID starting with b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a not found: ID does not exist" containerID="b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.965109 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a"} err="failed to get container status \"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a\": rpc error: code = NotFound desc = could not find container \"b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a\": container with ID starting with b878893e1f53456a02a357b740c56bf4acda63cdecbbadf451229a1b6390746a not 
found: ID does not exist" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.965203 4833 scope.go:117] "RemoveContainer" containerID="dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf" Jan 27 14:58:14 crc kubenswrapper[4833]: E0127 14:58:14.965890 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf\": container with ID starting with dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf not found: ID does not exist" containerID="dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.965926 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf"} err="failed to get container status \"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf\": rpc error: code = NotFound desc = could not find container \"dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf\": container with ID starting with dfbb3d7d9df34279f2407ebd2ae4e938ca46af759ca336e303927c3f34f0fdbf not found: ID does not exist" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.965947 4833 scope.go:117] "RemoveContainer" containerID="ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087" Jan 27 14:58:14 crc kubenswrapper[4833]: E0127 14:58:14.966243 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087\": container with ID starting with ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087 not found: ID does not exist" containerID="ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087" Jan 27 14:58:14 crc kubenswrapper[4833]: I0127 14:58:14.966287 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087"} err="failed to get container status \"ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087\": rpc error: code = NotFound desc = could not find container \"ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087\": container with ID starting with ca38212bdd213a98dc4de79082570c819cb51c7bec35fdd4260115a21db3c087 not found: ID does not exist" Jan 27 14:58:15 crc kubenswrapper[4833]: I0127 14:58:15.223788 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" path="/var/lib/kubelet/pods/b6fd2dd1-5c2a-421d-92ee-b12904f72f61/volumes" Jan 27 14:58:53 crc kubenswrapper[4833]: I0127 14:58:53.620815 4833 generic.go:334] "Generic (PLEG): container finished" podID="de21b54e-baa7-4329-afa1-44caba34567e" containerID="4f4e390dd248144ef91927b24e1b6868152a25c50ad1ac2d11d9c92c658f0ab3" exitCode=0 Jan 27 14:58:53 crc kubenswrapper[4833]: I0127 14:58:53.620877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" event={"ID":"de21b54e-baa7-4329-afa1-44caba34567e","Type":"ContainerDied","Data":"4f4e390dd248144ef91927b24e1b6868152a25c50ad1ac2d11d9c92c658f0ab3"} Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.143321 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.247922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.248084 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.248144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.248162 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.248763 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc 
kubenswrapper[4833]: I0127 14:58:55.248811 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6ffm\" (UniqueName: \"kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.248860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle\") pod \"de21b54e-baa7-4329-afa1-44caba34567e\" (UID: \"de21b54e-baa7-4329-afa1-44caba34567e\") " Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.253985 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm" (OuterVolumeSpecName: "kube-api-access-t6ffm") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "kube-api-access-t6ffm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.255262 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.280616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.282493 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory" (OuterVolumeSpecName: "inventory") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.282531 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.296977 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.305282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "de21b54e-baa7-4329-afa1-44caba34567e" (UID: "de21b54e-baa7-4329-afa1-44caba34567e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351162 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351206 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351234 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351256 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351285 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 
14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351304 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6ffm\" (UniqueName: \"kubernetes.io/projected/de21b54e-baa7-4329-afa1-44caba34567e-kube-api-access-t6ffm\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.351323 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de21b54e-baa7-4329-afa1-44caba34567e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.645730 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" event={"ID":"de21b54e-baa7-4329-afa1-44caba34567e","Type":"ContainerDied","Data":"264766a3178cd2e468f76f5056ed3ae5f159a5e3f3f3c2af838902fbc5e6c7a0"} Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.647836 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264766a3178cd2e468f76f5056ed3ae5f159a5e3f3f3c2af838902fbc5e6c7a0" Jan 27 14:58:55 crc kubenswrapper[4833]: I0127 14:58:55.646043 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.008712 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:14 crc kubenswrapper[4833]: E0127 14:59:14.009623 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="extract-utilities" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.009714 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="extract-utilities" Jan 27 14:59:14 crc kubenswrapper[4833]: E0127 14:59:14.009734 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de21b54e-baa7-4329-afa1-44caba34567e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.009744 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="de21b54e-baa7-4329-afa1-44caba34567e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:59:14 crc kubenswrapper[4833]: E0127 14:59:14.009772 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="registry-server" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.009780 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="registry-server" Jan 27 14:59:14 crc kubenswrapper[4833]: E0127 14:59:14.009796 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="extract-content" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.009820 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="extract-content" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.010027 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="de21b54e-baa7-4329-afa1-44caba34567e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.010056 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6fd2dd1-5c2a-421d-92ee-b12904f72f61" containerName="registry-server" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.011506 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.056718 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.166998 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2f6\" (UniqueName: \"kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.167069 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.167191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 
crc kubenswrapper[4833]: I0127 14:59:14.269481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.269661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q2f6\" (UniqueName: \"kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.269714 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.269999 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.270099 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 
14:59:14.289326 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q2f6\" (UniqueName: \"kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6\") pod \"community-operators-q9752\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.390593 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:14 crc kubenswrapper[4833]: I0127 14:59:14.936831 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:14 crc kubenswrapper[4833]: W0127 14:59:14.943081 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a826f0_3ded_4558_bacf_e74faa4018c9.slice/crio-f14c7fcb59039a8f70a46fb7b06967c9d6688bd0ca1ac2ab7168b9adafc05187 WatchSource:0}: Error finding container f14c7fcb59039a8f70a46fb7b06967c9d6688bd0ca1ac2ab7168b9adafc05187: Status 404 returned error can't find the container with id f14c7fcb59039a8f70a46fb7b06967c9d6688bd0ca1ac2ab7168b9adafc05187 Jan 27 14:59:15 crc kubenswrapper[4833]: I0127 14:59:15.881528 4833 generic.go:334] "Generic (PLEG): container finished" podID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerID="cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb" exitCode=0 Jan 27 14:59:15 crc kubenswrapper[4833]: I0127 14:59:15.881580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerDied","Data":"cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb"} Jan 27 14:59:15 crc kubenswrapper[4833]: I0127 14:59:15.881606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerStarted","Data":"f14c7fcb59039a8f70a46fb7b06967c9d6688bd0ca1ac2ab7168b9adafc05187"} Jan 27 14:59:16 crc kubenswrapper[4833]: I0127 14:59:16.892352 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerStarted","Data":"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a"} Jan 27 14:59:17 crc kubenswrapper[4833]: I0127 14:59:17.904904 4833 generic.go:334] "Generic (PLEG): container finished" podID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerID="f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a" exitCode=0 Jan 27 14:59:17 crc kubenswrapper[4833]: I0127 14:59:17.904973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerDied","Data":"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a"} Jan 27 14:59:18 crc kubenswrapper[4833]: I0127 14:59:18.915596 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerStarted","Data":"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb"} Jan 27 14:59:18 crc kubenswrapper[4833]: I0127 14:59:18.959298 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q9752" podStartSLOduration=3.42965707 podStartE2EDuration="5.959238796s" podCreationTimestamp="2026-01-27 14:59:13 +0000 UTC" firstStartedPulling="2026-01-27 14:59:15.883646424 +0000 UTC m=+2857.534970826" lastFinishedPulling="2026-01-27 14:59:18.41322815 +0000 UTC m=+2860.064552552" observedRunningTime="2026-01-27 14:59:18.950776585 +0000 UTC m=+2860.602100987" 
watchObservedRunningTime="2026-01-27 14:59:18.959238796 +0000 UTC m=+2860.610563198" Jan 27 14:59:24 crc kubenswrapper[4833]: I0127 14:59:24.391570 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:24 crc kubenswrapper[4833]: I0127 14:59:24.392262 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:24 crc kubenswrapper[4833]: I0127 14:59:24.481140 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:25 crc kubenswrapper[4833]: I0127 14:59:25.043875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:25 crc kubenswrapper[4833]: I0127 14:59:25.094768 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.018929 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q9752" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="registry-server" containerID="cri-o://3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb" gracePeriod=2 Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.738103 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.850035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q2f6\" (UniqueName: \"kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6\") pod \"64a826f0-3ded-4558-bacf-e74faa4018c9\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.850230 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content\") pod \"64a826f0-3ded-4558-bacf-e74faa4018c9\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.850407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities\") pod \"64a826f0-3ded-4558-bacf-e74faa4018c9\" (UID: \"64a826f0-3ded-4558-bacf-e74faa4018c9\") " Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.851561 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities" (OuterVolumeSpecName: "utilities") pod "64a826f0-3ded-4558-bacf-e74faa4018c9" (UID: "64a826f0-3ded-4558-bacf-e74faa4018c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.861681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6" (OuterVolumeSpecName: "kube-api-access-8q2f6") pod "64a826f0-3ded-4558-bacf-e74faa4018c9" (UID: "64a826f0-3ded-4558-bacf-e74faa4018c9"). InnerVolumeSpecName "kube-api-access-8q2f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.952674 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.952712 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q2f6\" (UniqueName: \"kubernetes.io/projected/64a826f0-3ded-4558-bacf-e74faa4018c9-kube-api-access-8q2f6\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:27 crc kubenswrapper[4833]: I0127 14:59:27.954143 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64a826f0-3ded-4558-bacf-e74faa4018c9" (UID: "64a826f0-3ded-4558-bacf-e74faa4018c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.028076 4833 generic.go:334] "Generic (PLEG): container finished" podID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerID="3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb" exitCode=0 Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.028120 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerDied","Data":"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb"} Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.028144 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q9752" event={"ID":"64a826f0-3ded-4558-bacf-e74faa4018c9","Type":"ContainerDied","Data":"f14c7fcb59039a8f70a46fb7b06967c9d6688bd0ca1ac2ab7168b9adafc05187"} Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 
14:59:28.028160 4833 scope.go:117] "RemoveContainer" containerID="3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.028284 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q9752" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.049879 4833 scope.go:117] "RemoveContainer" containerID="f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.055947 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64a826f0-3ded-4558-bacf-e74faa4018c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.065690 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.081363 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q9752"] Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.085324 4833 scope.go:117] "RemoveContainer" containerID="cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.116066 4833 scope.go:117] "RemoveContainer" containerID="3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb" Jan 27 14:59:28 crc kubenswrapper[4833]: E0127 14:59:28.116769 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb\": container with ID starting with 3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb not found: ID does not exist" containerID="3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 
14:59:28.116942 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb"} err="failed to get container status \"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb\": rpc error: code = NotFound desc = could not find container \"3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb\": container with ID starting with 3386ab1a970c167ceb5821ce39cce34dbb4cfd0a99e91299868e7aa77dcedacb not found: ID does not exist" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.117124 4833 scope.go:117] "RemoveContainer" containerID="f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a" Jan 27 14:59:28 crc kubenswrapper[4833]: E0127 14:59:28.117657 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a\": container with ID starting with f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a not found: ID does not exist" containerID="f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.117695 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a"} err="failed to get container status \"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a\": rpc error: code = NotFound desc = could not find container \"f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a\": container with ID starting with f57e3fb9f1af2c9972d255b71a09ffc26e9dd1f86b3da9f9f9b4444606475a3a not found: ID does not exist" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.117723 4833 scope.go:117] "RemoveContainer" containerID="cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb" Jan 27 14:59:28 crc 
kubenswrapper[4833]: E0127 14:59:28.118059 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb\": container with ID starting with cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb not found: ID does not exist" containerID="cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb" Jan 27 14:59:28 crc kubenswrapper[4833]: I0127 14:59:28.118114 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb"} err="failed to get container status \"cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb\": rpc error: code = NotFound desc = could not find container \"cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb\": container with ID starting with cc4b966e3eb336460944d1a5ba2881fd07368246787de231ee1b02f50902d5bb not found: ID does not exist" Jan 27 14:59:29 crc kubenswrapper[4833]: I0127 14:59:29.229548 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" path="/var/lib/kubelet/pods/64a826f0-3ded-4558-bacf-e74faa4018c9/volumes" Jan 27 14:59:40 crc kubenswrapper[4833]: I0127 14:59:40.676332 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:40 crc kubenswrapper[4833]: I0127 14:59:40.677274 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="prometheus" containerID="cri-o://26c8f3a910042695e917df206850fa5c7ac9165185789434f9ec79f7b3aaae7b" gracePeriod=600 Jan 27 14:59:40 crc kubenswrapper[4833]: I0127 14:59:40.677364 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="thanos-sidecar" containerID="cri-o://fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa" gracePeriod=600 Jan 27 14:59:40 crc kubenswrapper[4833]: I0127 14:59:40.677408 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="config-reloader" containerID="cri-o://52b7c750f2cee3c45c2e0ed7185f0e745182a58a8a9d2392c410103b133df09c" gracePeriod=600 Jan 27 14:59:40 crc kubenswrapper[4833]: E0127 14:59:40.760407 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9498c3c5_aa3d_400f_9970_7aa3388688a3.slice/crio-conmon-fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9498c3c5_aa3d_400f_9970_7aa3388688a3.slice/crio-fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.175755 4833 generic.go:334] "Generic (PLEG): container finished" podID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerID="fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa" exitCode=0 Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.176023 4833 generic.go:334] "Generic (PLEG): container finished" podID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerID="52b7c750f2cee3c45c2e0ed7185f0e745182a58a8a9d2392c410103b133df09c" exitCode=0 Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.176034 4833 generic.go:334] "Generic (PLEG): container finished" podID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerID="26c8f3a910042695e917df206850fa5c7ac9165185789434f9ec79f7b3aaae7b" exitCode=0 Jan 27 14:59:41 crc 
kubenswrapper[4833]: I0127 14:59:41.175852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerDied","Data":"fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa"} Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.176078 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerDied","Data":"52b7c750f2cee3c45c2e0ed7185f0e745182a58a8a9d2392c410103b133df09c"} Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.176097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerDied","Data":"26c8f3a910042695e917df206850fa5c7ac9165185789434f9ec79f7b3aaae7b"} Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.645847 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.739883 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9kpn\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740400 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740430 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740553 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740675 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740780 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740892 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.740924 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"9498c3c5-aa3d-400f-9970-7aa3388688a3\" (UID: \"9498c3c5-aa3d-400f-9970-7aa3388688a3\") " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.741141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.741499 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.743756 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.744084 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.746155 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn" (OuterVolumeSpecName: "kube-api-access-p9kpn") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "kube-api-access-p9kpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.746763 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.763174 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out" (OuterVolumeSpecName: "config-out") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.765288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config" (OuterVolumeSpecName: "config") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.765734 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.767218 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.784609 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.785458 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.808604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "pvc-fe2666da-d978-4526-ad6b-c7fb563ec194". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844141 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844189 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") on node \"crc\" " Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844206 4833 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844259 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9kpn\" (UniqueName: \"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-kube-api-access-p9kpn\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844275 4833 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9498c3c5-aa3d-400f-9970-7aa3388688a3-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844292 4833 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844309 4833 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9498c3c5-aa3d-400f-9970-7aa3388688a3-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844329 4833 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844345 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844360 4833 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9498c3c5-aa3d-400f-9970-7aa3388688a3-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.844377 4833 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.846810 4833 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config" (OuterVolumeSpecName: "web-config") pod "9498c3c5-aa3d-400f-9970-7aa3388688a3" (UID: "9498c3c5-aa3d-400f-9970-7aa3388688a3"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.876331 4833 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.876498 4833 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-fe2666da-d978-4526-ad6b-c7fb563ec194" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194") on node "crc" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.945947 4833 reconciler_common.go:293] "Volume detached for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:41 crc kubenswrapper[4833]: I0127 14:59:41.945990 4833 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9498c3c5-aa3d-400f-9970-7aa3388688a3-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.191779 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9498c3c5-aa3d-400f-9970-7aa3388688a3","Type":"ContainerDied","Data":"f8f00dae01bab0a1fa6b804680d249ffae0492e3c641febe89f8c29149e8d636"} Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.191875 4833 scope.go:117] "RemoveContainer" containerID="fff536ce61eb39a5e6bd768592a99c7bf3323efc22d6ae2dcfc17237607cbbfa" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.191889 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.226690 4833 scope.go:117] "RemoveContainer" containerID="52b7c750f2cee3c45c2e0ed7185f0e745182a58a8a9d2392c410103b133df09c" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.246905 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.260713 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.266385 4833 scope.go:117] "RemoveContainer" containerID="26c8f3a910042695e917df206850fa5c7ac9165185789434f9ec79f7b3aaae7b" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.276599 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277027 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="prometheus" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277048 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="prometheus" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277063 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="config-reloader" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277069 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="config-reloader" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277082 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="thanos-sidecar" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277090 4833 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="thanos-sidecar" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277113 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="extract-content" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277119 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="extract-content" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277134 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="extract-utilities" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277140 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="extract-utilities" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277151 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="registry-server" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277158 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="registry-server" Jan 27 14:59:42 crc kubenswrapper[4833]: E0127 14:59:42.277169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="init-config-reloader" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277175 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="init-config-reloader" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277367 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="config-reloader" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277387 4833 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="prometheus" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277406 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="thanos-sidecar" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.277419 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a826f0-3ded-4558-bacf-e74faa4018c9" containerName="registry-server" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.279591 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.290895 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.299518 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.300659 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.300727 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.300757 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.300880 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-r6fq9" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.300953 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 
14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.301969 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.303429 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.311566 4833 scope.go:117] "RemoveContainer" containerID="7c8a8be740230c85d90549b48f71c460aaf0d65223613733222524b2d00d9f7d" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356471 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59tkn\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-kube-api-access-59tkn\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356508 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356558 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356683 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356704 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356721 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9898dddd-efe1-4386-a386-723cd4e3b1e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 
crc kubenswrapper[4833]: I0127 14:59:42.356793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356855 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.356901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458488 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9898dddd-efe1-4386-a386-723cd4e3b1e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458592 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: 
I0127 14:59:42.458619 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458652 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458735 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59tkn\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-kube-api-access-59tkn\") pod \"prometheus-metric-storage-0\" (UID: 
\"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.458896 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.459001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.459099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.459238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.459688 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-0\") pod 
\"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.460129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.460295 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9898dddd-efe1-4386-a386-723cd4e3b1e9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.462966 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9898dddd-efe1-4386-a386-723cd4e3b1e9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.463153 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.463228 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae037006a250a748df6e15e9e2e300ecef710dd9481d48fc8efc4ea8fd9ab428/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.463360 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.464375 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.465159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.466221 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.466374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.470950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.477167 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9898dddd-efe1-4386-a386-723cd4e3b1e9-config\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.481816 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59tkn\" (UniqueName: \"kubernetes.io/projected/9898dddd-efe1-4386-a386-723cd4e3b1e9-kube-api-access-59tkn\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.513309 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe2666da-d978-4526-ad6b-c7fb563ec194\") pod \"prometheus-metric-storage-0\" (UID: \"9898dddd-efe1-4386-a386-723cd4e3b1e9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:42 crc kubenswrapper[4833]: I0127 14:59:42.606297 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:59:43 crc kubenswrapper[4833]: I0127 14:59:43.133086 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:59:43 crc kubenswrapper[4833]: I0127 14:59:43.203869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerStarted","Data":"5850abc907165dbea8c6501b6f3100682ec284db705a4197333452ff29fd12d5"} Jan 27 14:59:43 crc kubenswrapper[4833]: I0127 14:59:43.247571 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" path="/var/lib/kubelet/pods/9498c3c5-aa3d-400f-9970-7aa3388688a3/volumes" Jan 27 14:59:44 crc kubenswrapper[4833]: I0127 14:59:44.422050 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9498c3c5-aa3d-400f-9970-7aa3388688a3" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.144:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:59:49 crc kubenswrapper[4833]: I0127 14:59:49.308974 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerStarted","Data":"f0d9d9cc2db52e67751fe772fe5576e73de985281cc1a0872b9553fe39bd594a"} Jan 27 14:59:56 crc kubenswrapper[4833]: I0127 14:59:56.376406 4833 
generic.go:334] "Generic (PLEG): container finished" podID="9898dddd-efe1-4386-a386-723cd4e3b1e9" containerID="f0d9d9cc2db52e67751fe772fe5576e73de985281cc1a0872b9553fe39bd594a" exitCode=0 Jan 27 14:59:56 crc kubenswrapper[4833]: I0127 14:59:56.376491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerDied","Data":"f0d9d9cc2db52e67751fe772fe5576e73de985281cc1a0872b9553fe39bd594a"} Jan 27 14:59:57 crc kubenswrapper[4833]: I0127 14:59:57.396365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerStarted","Data":"b52d0a1394aecc73a1df5e02d4718ea2f57d427ee6b24998a5f8bc958c645596"} Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.184891 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj"] Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.187861 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.190617 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.191634 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.199064 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj"] Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.325371 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscnq\" (UniqueName: \"kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.325424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.326820 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.428773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.428886 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscnq\" (UniqueName: \"kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.428914 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.429772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.435256 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.446803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscnq\" (UniqueName: \"kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq\") pod \"collect-profiles-29492100-nzqrj\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:00 crc kubenswrapper[4833]: I0127 15:00:00.522245 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:01 crc kubenswrapper[4833]: W0127 15:00:01.041625 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d2f66f_a552_4e92_8270_db03275d821f.slice/crio-81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf WatchSource:0}: Error finding container 81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf: Status 404 returned error can't find the container with id 81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.047846 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj"] Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.441056 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" event={"ID":"b0d2f66f-a552-4e92-8270-db03275d821f","Type":"ContainerStarted","Data":"4d85fba9714ebb36c5535cb56d46eeae3ffe3844b3775eead3b73dd5c5a4a7ac"} Jan 27 15:00:01 crc 
kubenswrapper[4833]: I0127 15:00:01.442464 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" event={"ID":"b0d2f66f-a552-4e92-8270-db03275d821f","Type":"ContainerStarted","Data":"81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf"} Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.446328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerStarted","Data":"99a97fe148475e933c83b574cb394f3de3eb545190b2d2400f1c27f705caaf87"} Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.446388 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9898dddd-efe1-4386-a386-723cd4e3b1e9","Type":"ContainerStarted","Data":"42c58d5644c37c5fe3412bd129e6eab1b190860013a5cf5bf7502c4634f02234"} Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.462595 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" podStartSLOduration=1.462576256 podStartE2EDuration="1.462576256s" podCreationTimestamp="2026-01-27 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:00:01.4610916 +0000 UTC m=+2903.112416002" watchObservedRunningTime="2026-01-27 15:00:01.462576256 +0000 UTC m=+2903.113900658" Jan 27 15:00:01 crc kubenswrapper[4833]: I0127 15:00:01.496712 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.496693245 podStartE2EDuration="19.496693245s" podCreationTimestamp="2026-01-27 14:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
15:00:01.489068835 +0000 UTC m=+2903.140393267" watchObservedRunningTime="2026-01-27 15:00:01.496693245 +0000 UTC m=+2903.148017647" Jan 27 15:00:02 crc kubenswrapper[4833]: I0127 15:00:02.262980 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:00:02 crc kubenswrapper[4833]: I0127 15:00:02.263392 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:00:02 crc kubenswrapper[4833]: I0127 15:00:02.455210 4833 generic.go:334] "Generic (PLEG): container finished" podID="b0d2f66f-a552-4e92-8270-db03275d821f" containerID="4d85fba9714ebb36c5535cb56d46eeae3ffe3844b3775eead3b73dd5c5a4a7ac" exitCode=0 Jan 27 15:00:02 crc kubenswrapper[4833]: I0127 15:00:02.455264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" event={"ID":"b0d2f66f-a552-4e92-8270-db03275d821f","Type":"ContainerDied","Data":"4d85fba9714ebb36c5535cb56d46eeae3ffe3844b3775eead3b73dd5c5a4a7ac"} Jan 27 15:00:02 crc kubenswrapper[4833]: I0127 15:00:02.606983 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 15:00:03 crc kubenswrapper[4833]: I0127 15:00:03.874738 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.018148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pscnq\" (UniqueName: \"kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq\") pod \"b0d2f66f-a552-4e92-8270-db03275d821f\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.018283 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume\") pod \"b0d2f66f-a552-4e92-8270-db03275d821f\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.018364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume\") pod \"b0d2f66f-a552-4e92-8270-db03275d821f\" (UID: \"b0d2f66f-a552-4e92-8270-db03275d821f\") " Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.019217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0d2f66f-a552-4e92-8270-db03275d821f" (UID: "b0d2f66f-a552-4e92-8270-db03275d821f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.020083 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d2f66f-a552-4e92-8270-db03275d821f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.029700 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0d2f66f-a552-4e92-8270-db03275d821f" (UID: "b0d2f66f-a552-4e92-8270-db03275d821f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.031884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq" (OuterVolumeSpecName: "kube-api-access-pscnq") pod "b0d2f66f-a552-4e92-8270-db03275d821f" (UID: "b0d2f66f-a552-4e92-8270-db03275d821f"). InnerVolumeSpecName "kube-api-access-pscnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.122887 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pscnq\" (UniqueName: \"kubernetes.io/projected/b0d2f66f-a552-4e92-8270-db03275d821f-kube-api-access-pscnq\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.122946 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0d2f66f-a552-4e92-8270-db03275d821f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.488343 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.488582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj" event={"ID":"b0d2f66f-a552-4e92-8270-db03275d821f","Type":"ContainerDied","Data":"81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf"} Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.488626 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cbd4b0ecb2078eafffba66c46dae0dea352f82284e6ddd1b83a7800a4cb6bf" Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.549408 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd"] Jan 27 15:00:04 crc kubenswrapper[4833]: I0127 15:00:04.559402 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-jhhfd"] Jan 27 15:00:05 crc kubenswrapper[4833]: I0127 15:00:05.228324 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1fb855-4475-4e85-891a-6fb0e60b1666" path="/var/lib/kubelet/pods/3c1fb855-4475-4e85-891a-6fb0e60b1666/volumes" Jan 27 15:00:12 crc kubenswrapper[4833]: I0127 15:00:12.607664 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 15:00:12 crc kubenswrapper[4833]: I0127 15:00:12.614967 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 15:00:13 crc kubenswrapper[4833]: I0127 15:00:13.592678 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.183938 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] 
Jan 27 15:00:23 crc kubenswrapper[4833]: E0127 15:00:23.184995 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d2f66f-a552-4e92-8270-db03275d821f" containerName="collect-profiles" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.185013 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d2f66f-a552-4e92-8270-db03275d821f" containerName="collect-profiles" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.185243 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d2f66f-a552-4e92-8270-db03275d821f" containerName="collect-profiles" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.186128 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.188461 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.188575 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.189159 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.190640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bvjzm" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.200748 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.230036 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.230400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.230556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2r76\" (UniqueName: \"kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.230720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.230904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.231085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.231233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.231361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.231496 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2r76\" (UniqueName: \"kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76\") pod 
\"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334401 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334466 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334520 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 
15:00:23.334546 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.334570 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.335112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.335556 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.335781 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.335796 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.336851 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.344791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.348095 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.348092 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.358258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2r76\" (UniqueName: 
\"kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.375305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " pod="openstack/tempest-tests-tempest" Jan 27 15:00:23 crc kubenswrapper[4833]: I0127 15:00:23.510964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:00:24 crc kubenswrapper[4833]: I0127 15:00:24.027129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 15:00:24 crc kubenswrapper[4833]: I0127 15:00:24.700353 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5faf0556-c6da-4c93-9ce7-f02ed716c092","Type":"ContainerStarted","Data":"d0096a610a6c73ab5149d37e54f391d63865b5ab11cb0cfd225a7f5a0292882f"} Jan 27 15:00:32 crc kubenswrapper[4833]: I0127 15:00:32.260220 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:00:32 crc kubenswrapper[4833]: I0127 15:00:32.260941 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:00:37 crc kubenswrapper[4833]: E0127 
15:00:37.718762 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 27 15:00:37 crc kubenswrapper[4833]: E0127 15:00:37.719838 4833 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest" Jan 27 15:00:37 crc kubenswrapper[4833]: E0127 15:00:37.720176 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.22:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/opensta
ck/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2r76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5faf0556-c6da-4c93-9ce7-f02ed716c092): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:00:37 crc kubenswrapper[4833]: E0127 15:00:37.721635 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5faf0556-c6da-4c93-9ce7-f02ed716c092" Jan 27 15:00:37 crc kubenswrapper[4833]: E0127 15:00:37.878013 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/podified-epoxy-centos9/openstack-tempest-all:watcher_latest\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5faf0556-c6da-4c93-9ce7-f02ed716c092" Jan 27 15:00:49 crc kubenswrapper[4833]: I0127 15:00:49.307428 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 15:00:51 crc kubenswrapper[4833]: I0127 15:00:51.060038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5faf0556-c6da-4c93-9ce7-f02ed716c092","Type":"ContainerStarted","Data":"816877fbc5e07b2a5a0ff947761635c08a36ab7703cfeb21646f99b3061cea93"} Jan 27 15:00:51 crc kubenswrapper[4833]: I0127 15:00:51.088077 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.8101694249999998 podStartE2EDuration="29.088059824s" podCreationTimestamp="2026-01-27 15:00:22 +0000 UTC" firstStartedPulling="2026-01-27 15:00:24.025531828 +0000 UTC m=+2925.676856260" lastFinishedPulling="2026-01-27 15:00:49.303422237 +0000 UTC m=+2950.954746659" observedRunningTime="2026-01-27 15:00:51.079764817 +0000 UTC m=+2952.731089239" watchObservedRunningTime="2026-01-27 15:00:51.088059824 +0000 UTC m=+2952.739384226" Jan 27 15:00:56 crc kubenswrapper[4833]: I0127 15:00:56.155174 4833 scope.go:117] "RemoveContainer" containerID="2371cf77a3863b0cf1a41af23ac07dc155201e9d269db7008fb1bdb5f9731388" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.176308 4833 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492101-zch2v"] Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.177964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.185168 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-zch2v"] Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.252671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlxw\" (UniqueName: \"kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.252745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.252799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.252815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle\") pod \"keystone-cron-29492101-zch2v\" (UID: 
\"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.354907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlxw\" (UniqueName: \"kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.354996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.355052 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.355075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.362186 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " 
pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.362698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.362868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.372884 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlxw\" (UniqueName: \"kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw\") pod \"keystone-cron-29492101-zch2v\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:00 crc kubenswrapper[4833]: I0127 15:01:00.505557 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:01 crc kubenswrapper[4833]: I0127 15:01:01.004263 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-zch2v"] Jan 27 15:01:01 crc kubenswrapper[4833]: I0127 15:01:01.202354 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-zch2v" event={"ID":"4186bda5-641e-4266-8207-2618451a497e","Type":"ContainerStarted","Data":"8ecef54e6a4e5dadd3b0952c53d5f863f5d31206e9b3abd4c5485f7b16e7ec50"} Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.231262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-zch2v" event={"ID":"4186bda5-641e-4266-8207-2618451a497e","Type":"ContainerStarted","Data":"5cd33e9f71ca843e3b1747b6333e28c94b8d742ed629a59532206e04fe982333"} Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.260200 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.260251 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.260292 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.261062 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.261118 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" gracePeriod=600 Jan 27 15:01:02 crc kubenswrapper[4833]: I0127 15:01:02.269158 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492101-zch2v" podStartSLOduration=2.269134656 podStartE2EDuration="2.269134656s" podCreationTimestamp="2026-01-27 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:01:02.251218561 +0000 UTC m=+2963.902542963" watchObservedRunningTime="2026-01-27 15:01:02.269134656 +0000 UTC m=+2963.920459078" Jan 27 15:01:02 crc kubenswrapper[4833]: E0127 15:01:02.396964 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:01:03 crc kubenswrapper[4833]: I0127 15:01:03.249250 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" exitCode=0 Jan 27 15:01:03 crc 
kubenswrapper[4833]: I0127 15:01:03.249328 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862"} Jan 27 15:01:03 crc kubenswrapper[4833]: I0127 15:01:03.249790 4833 scope.go:117] "RemoveContainer" containerID="278197ced7e604bd2cdde46a41995a5b9d2b4edbcbbe51f8760afc35b02a8139" Jan 27 15:01:03 crc kubenswrapper[4833]: I0127 15:01:03.250935 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:01:03 crc kubenswrapper[4833]: E0127 15:01:03.251597 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:01:04 crc kubenswrapper[4833]: I0127 15:01:04.263411 4833 generic.go:334] "Generic (PLEG): container finished" podID="4186bda5-641e-4266-8207-2618451a497e" containerID="5cd33e9f71ca843e3b1747b6333e28c94b8d742ed629a59532206e04fe982333" exitCode=0 Jan 27 15:01:04 crc kubenswrapper[4833]: I0127 15:01:04.263605 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-zch2v" event={"ID":"4186bda5-641e-4266-8207-2618451a497e","Type":"ContainerDied","Data":"5cd33e9f71ca843e3b1747b6333e28c94b8d742ed629a59532206e04fe982333"} Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.620882 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.690700 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjlxw\" (UniqueName: \"kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw\") pod \"4186bda5-641e-4266-8207-2618451a497e\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.690848 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data\") pod \"4186bda5-641e-4266-8207-2618451a497e\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.690944 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle\") pod \"4186bda5-641e-4266-8207-2618451a497e\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.690999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys\") pod \"4186bda5-641e-4266-8207-2618451a497e\" (UID: \"4186bda5-641e-4266-8207-2618451a497e\") " Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.701664 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw" (OuterVolumeSpecName: "kube-api-access-sjlxw") pod "4186bda5-641e-4266-8207-2618451a497e" (UID: "4186bda5-641e-4266-8207-2618451a497e"). InnerVolumeSpecName "kube-api-access-sjlxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.705601 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4186bda5-641e-4266-8207-2618451a497e" (UID: "4186bda5-641e-4266-8207-2618451a497e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.822999 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.823043 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjlxw\" (UniqueName: \"kubernetes.io/projected/4186bda5-641e-4266-8207-2618451a497e-kube-api-access-sjlxw\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.828623 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4186bda5-641e-4266-8207-2618451a497e" (UID: "4186bda5-641e-4266-8207-2618451a497e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.829111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data" (OuterVolumeSpecName: "config-data") pod "4186bda5-641e-4266-8207-2618451a497e" (UID: "4186bda5-641e-4266-8207-2618451a497e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.924980 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:05 crc kubenswrapper[4833]: I0127 15:01:05.925248 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4186bda5-641e-4266-8207-2618451a497e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:06 crc kubenswrapper[4833]: I0127 15:01:06.288306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-zch2v" event={"ID":"4186bda5-641e-4266-8207-2618451a497e","Type":"ContainerDied","Data":"8ecef54e6a4e5dadd3b0952c53d5f863f5d31206e9b3abd4c5485f7b16e7ec50"} Jan 27 15:01:06 crc kubenswrapper[4833]: I0127 15:01:06.288345 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ecef54e6a4e5dadd3b0952c53d5f863f5d31206e9b3abd4c5485f7b16e7ec50" Jan 27 15:01:06 crc kubenswrapper[4833]: I0127 15:01:06.288984 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-zch2v" Jan 27 15:01:17 crc kubenswrapper[4833]: I0127 15:01:17.210733 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:01:17 crc kubenswrapper[4833]: E0127 15:01:17.211691 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:01:29 crc kubenswrapper[4833]: I0127 15:01:29.220036 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:01:29 crc kubenswrapper[4833]: E0127 15:01:29.221067 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:01:44 crc kubenswrapper[4833]: I0127 15:01:44.211250 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:01:44 crc kubenswrapper[4833]: E0127 15:01:44.212743 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:01:57 crc kubenswrapper[4833]: I0127 15:01:57.211632 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:01:57 crc kubenswrapper[4833]: E0127 15:01:57.213989 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.306678 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:02 crc kubenswrapper[4833]: E0127 15:02:02.309219 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4186bda5-641e-4266-8207-2618451a497e" containerName="keystone-cron" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.309368 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4186bda5-641e-4266-8207-2618451a497e" containerName="keystone-cron" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.309786 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4186bda5-641e-4266-8207-2618451a497e" containerName="keystone-cron" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.312020 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.319815 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.387741 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.388013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.388128 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvbr\" (UniqueName: \"kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.490374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.490419 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.490481 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrvbr\" (UniqueName: \"kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.491346 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.492194 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.509917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrvbr\" (UniqueName: \"kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr\") pod \"redhat-marketplace-qh9gn\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:02 crc kubenswrapper[4833]: I0127 15:02:02.644363 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:03 crc kubenswrapper[4833]: I0127 15:02:03.129425 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:03 crc kubenswrapper[4833]: I0127 15:02:03.947082 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerID="59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63" exitCode=0 Jan 27 15:02:03 crc kubenswrapper[4833]: I0127 15:02:03.947383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerDied","Data":"59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63"} Jan 27 15:02:03 crc kubenswrapper[4833]: I0127 15:02:03.947476 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerStarted","Data":"d13b65c6a8a67fccc930c984d57e7a05ab0f090a6bf2cc05a27fb51b7bfedfed"} Jan 27 15:02:03 crc kubenswrapper[4833]: I0127 15:02:03.950539 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:02:05 crc kubenswrapper[4833]: I0127 15:02:05.974472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerStarted","Data":"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f"} Jan 27 15:02:06 crc kubenswrapper[4833]: I0127 15:02:06.988403 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerID="b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f" exitCode=0 Jan 27 15:02:06 crc kubenswrapper[4833]: I0127 15:02:06.988509 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerDied","Data":"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f"} Jan 27 15:02:08 crc kubenswrapper[4833]: I0127 15:02:08.001042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerStarted","Data":"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e"} Jan 27 15:02:08 crc kubenswrapper[4833]: I0127 15:02:08.029594 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qh9gn" podStartSLOduration=2.577134603 podStartE2EDuration="6.029576599s" podCreationTimestamp="2026-01-27 15:02:02 +0000 UTC" firstStartedPulling="2026-01-27 15:02:03.950243246 +0000 UTC m=+3025.601567658" lastFinishedPulling="2026-01-27 15:02:07.402685242 +0000 UTC m=+3029.054009654" observedRunningTime="2026-01-27 15:02:08.024826892 +0000 UTC m=+3029.676151294" watchObservedRunningTime="2026-01-27 15:02:08.029576599 +0000 UTC m=+3029.680901001" Jan 27 15:02:09 crc kubenswrapper[4833]: I0127 15:02:09.222233 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:02:09 crc kubenswrapper[4833]: E0127 15:02:09.223055 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:02:12 crc kubenswrapper[4833]: I0127 15:02:12.644489 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:12 crc kubenswrapper[4833]: I0127 15:02:12.644775 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:13 crc kubenswrapper[4833]: I0127 15:02:13.692537 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qh9gn" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="registry-server" probeResult="failure" output=< Jan 27 15:02:13 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 15:02:13 crc kubenswrapper[4833]: > Jan 27 15:02:22 crc kubenswrapper[4833]: I0127 15:02:22.729567 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:22 crc kubenswrapper[4833]: I0127 15:02:22.815701 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:22 crc kubenswrapper[4833]: I0127 15:02:22.987347 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:23 crc kubenswrapper[4833]: I0127 15:02:23.210651 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:02:23 crc kubenswrapper[4833]: E0127 15:02:23.211226 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.167819 4833 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-marketplace-qh9gn" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="registry-server" containerID="cri-o://1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e" gracePeriod=2 Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.696243 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.804945 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content\") pod \"9a89c691-6f60-4aa1-a606-44d7d45e1446\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.805011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities\") pod \"9a89c691-6f60-4aa1-a606-44d7d45e1446\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.805159 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrvbr\" (UniqueName: \"kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr\") pod \"9a89c691-6f60-4aa1-a606-44d7d45e1446\" (UID: \"9a89c691-6f60-4aa1-a606-44d7d45e1446\") " Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.806034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities" (OuterVolumeSpecName: "utilities") pod "9a89c691-6f60-4aa1-a606-44d7d45e1446" (UID: "9a89c691-6f60-4aa1-a606-44d7d45e1446"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.811598 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr" (OuterVolumeSpecName: "kube-api-access-vrvbr") pod "9a89c691-6f60-4aa1-a606-44d7d45e1446" (UID: "9a89c691-6f60-4aa1-a606-44d7d45e1446"). InnerVolumeSpecName "kube-api-access-vrvbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.839420 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a89c691-6f60-4aa1-a606-44d7d45e1446" (UID: "9a89c691-6f60-4aa1-a606-44d7d45e1446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.907744 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.907797 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a89c691-6f60-4aa1-a606-44d7d45e1446-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:02:24 crc kubenswrapper[4833]: I0127 15:02:24.907812 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrvbr\" (UniqueName: \"kubernetes.io/projected/9a89c691-6f60-4aa1-a606-44d7d45e1446-kube-api-access-vrvbr\") on node \"crc\" DevicePath \"\"" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.181863 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a89c691-6f60-4aa1-a606-44d7d45e1446" 
containerID="1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e" exitCode=0 Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.182240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerDied","Data":"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e"} Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.182272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh9gn" event={"ID":"9a89c691-6f60-4aa1-a606-44d7d45e1446","Type":"ContainerDied","Data":"d13b65c6a8a67fccc930c984d57e7a05ab0f090a6bf2cc05a27fb51b7bfedfed"} Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.182296 4833 scope.go:117] "RemoveContainer" containerID="1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.182572 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh9gn" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.223166 4833 scope.go:117] "RemoveContainer" containerID="b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.227505 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.236618 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh9gn"] Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.253674 4833 scope.go:117] "RemoveContainer" containerID="59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.312593 4833 scope.go:117] "RemoveContainer" containerID="1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e" Jan 27 15:02:25 crc kubenswrapper[4833]: E0127 15:02:25.313067 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e\": container with ID starting with 1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e not found: ID does not exist" containerID="1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.313122 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e"} err="failed to get container status \"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e\": rpc error: code = NotFound desc = could not find container \"1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e\": container with ID starting with 1532e23f72073ed16b9e5f1eacc0d4bd1854b5c2d8144c11f4c4e6183f7a087e not found: 
ID does not exist" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.313159 4833 scope.go:117] "RemoveContainer" containerID="b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f" Jan 27 15:02:25 crc kubenswrapper[4833]: E0127 15:02:25.313545 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f\": container with ID starting with b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f not found: ID does not exist" containerID="b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.313586 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f"} err="failed to get container status \"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f\": rpc error: code = NotFound desc = could not find container \"b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f\": container with ID starting with b4415fe3b3f454000324556bf09138abd12175731569586c3ae465eb81a4907f not found: ID does not exist" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.313609 4833 scope.go:117] "RemoveContainer" containerID="59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63" Jan 27 15:02:25 crc kubenswrapper[4833]: E0127 15:02:25.314038 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63\": container with ID starting with 59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63 not found: ID does not exist" containerID="59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63" Jan 27 15:02:25 crc kubenswrapper[4833]: I0127 15:02:25.314082 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63"} err="failed to get container status \"59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63\": rpc error: code = NotFound desc = could not find container \"59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63\": container with ID starting with 59196f2f26d0d43ff3c7f02418ca849a38dffe7e01bed3af7a20c6a4834ced63 not found: ID does not exist" Jan 27 15:02:27 crc kubenswrapper[4833]: I0127 15:02:27.231616 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" path="/var/lib/kubelet/pods/9a89c691-6f60-4aa1-a606-44d7d45e1446/volumes" Jan 27 15:02:35 crc kubenswrapper[4833]: I0127 15:02:35.210958 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:02:35 crc kubenswrapper[4833]: E0127 15:02:35.211930 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:02:49 crc kubenswrapper[4833]: I0127 15:02:49.225737 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:02:49 crc kubenswrapper[4833]: E0127 15:02:49.226814 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:02:56 crc kubenswrapper[4833]: I0127 15:02:56.285006 4833 scope.go:117] "RemoveContainer" containerID="84673c17a8ce3eb7d8400c556cb2909b7ab2716ac42b5e6e2a17fc188437f7dd" Jan 27 15:02:56 crc kubenswrapper[4833]: I0127 15:02:56.320500 4833 scope.go:117] "RemoveContainer" containerID="18373ab7d29f736d6ffda7e4f9640af911a1f5ca79fd048f392f65e9fc37f911" Jan 27 15:02:56 crc kubenswrapper[4833]: I0127 15:02:56.358105 4833 scope.go:117] "RemoveContainer" containerID="7ec679a39de0550780b606f60e769fe8e30771f2dd57276dae080fe569d738f9" Jan 27 15:03:01 crc kubenswrapper[4833]: I0127 15:03:01.211435 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:03:01 crc kubenswrapper[4833]: E0127 15:03:01.213846 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:03:13 crc kubenswrapper[4833]: I0127 15:03:13.211705 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:03:13 crc kubenswrapper[4833]: E0127 15:03:13.212413 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 
27 15:03:24 crc kubenswrapper[4833]: I0127 15:03:24.210812 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:03:24 crc kubenswrapper[4833]: E0127 15:03:24.212626 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:03:36 crc kubenswrapper[4833]: I0127 15:03:36.211211 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:03:36 crc kubenswrapper[4833]: E0127 15:03:36.211961 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:03:51 crc kubenswrapper[4833]: I0127 15:03:51.210863 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:03:51 crc kubenswrapper[4833]: E0127 15:03:51.211678 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:04:04 crc kubenswrapper[4833]: I0127 15:04:04.211594 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:04:04 crc kubenswrapper[4833]: E0127 15:04:04.212766 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:04:17 crc kubenswrapper[4833]: I0127 15:04:17.210624 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:04:17 crc kubenswrapper[4833]: E0127 15:04:17.211365 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:04:31 crc kubenswrapper[4833]: I0127 15:04:31.211728 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:04:31 crc kubenswrapper[4833]: E0127 15:04:31.212515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:04:43 crc kubenswrapper[4833]: I0127 15:04:43.211374 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:04:43 crc kubenswrapper[4833]: E0127 15:04:43.212153 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:04:58 crc kubenswrapper[4833]: I0127 15:04:58.211127 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:04:58 crc kubenswrapper[4833]: E0127 15:04:58.212247 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:05:09 crc kubenswrapper[4833]: I0127 15:05:09.222369 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:05:09 crc kubenswrapper[4833]: E0127 15:05:09.223515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:05:24 crc kubenswrapper[4833]: I0127 15:05:24.210700 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:05:24 crc kubenswrapper[4833]: E0127 15:05:24.211618 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:05:39 crc kubenswrapper[4833]: I0127 15:05:39.217809 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:05:39 crc kubenswrapper[4833]: E0127 15:05:39.219397 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:05:53 crc kubenswrapper[4833]: I0127 15:05:53.211310 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:05:53 crc kubenswrapper[4833]: E0127 15:05:53.212374 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:06:05 crc kubenswrapper[4833]: I0127 15:06:05.211219 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:06:05 crc kubenswrapper[4833]: I0127 15:06:05.577286 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5"} Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.042781 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:03 crc kubenswrapper[4833]: E0127 15:08:03.044849 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="registry-server" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.044885 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="registry-server" Jan 27 15:08:03 crc kubenswrapper[4833]: E0127 15:08:03.044940 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="extract-utilities" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.044950 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="extract-utilities" Jan 27 15:08:03 crc kubenswrapper[4833]: E0127 15:08:03.044983 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="extract-content" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 
15:08:03.044991 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="extract-content" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.045311 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a89c691-6f60-4aa1-a606-44d7d45e1446" containerName="registry-server" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.047951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.068790 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.209030 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.209248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwj5\" (UniqueName: \"kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.209591 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 
15:08:03.311140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.311280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwj5\" (UniqueName: \"kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.311378 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.311623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.311707 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.335227 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qtwj5\" (UniqueName: \"kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5\") pod \"redhat-operators-c8cnw\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.374577 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:03 crc kubenswrapper[4833]: I0127 15:08:03.898229 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:04 crc kubenswrapper[4833]: I0127 15:08:04.900173 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerID="09a30ff336d78aba19d3d7c64eef8ea78659a800a3c64383947cd933aa900cd6" exitCode=0 Jan 27 15:08:04 crc kubenswrapper[4833]: I0127 15:08:04.900242 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerDied","Data":"09a30ff336d78aba19d3d7c64eef8ea78659a800a3c64383947cd933aa900cd6"} Jan 27 15:08:04 crc kubenswrapper[4833]: I0127 15:08:04.900573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerStarted","Data":"c0eb353de2f07b0ee934c554d21a3486b50801e919511c96a92a93879a1bda61"} Jan 27 15:08:04 crc kubenswrapper[4833]: I0127 15:08:04.902998 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:08:05 crc kubenswrapper[4833]: I0127 15:08:05.910156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" 
event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerStarted","Data":"6f27db8f5adb04ddacf470459e0c44ce010626066fc9f92f7fb4667c5c9fdbdd"} Jan 27 15:08:11 crc kubenswrapper[4833]: I0127 15:08:11.963507 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerID="6f27db8f5adb04ddacf470459e0c44ce010626066fc9f92f7fb4667c5c9fdbdd" exitCode=0 Jan 27 15:08:11 crc kubenswrapper[4833]: I0127 15:08:11.963586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerDied","Data":"6f27db8f5adb04ddacf470459e0c44ce010626066fc9f92f7fb4667c5c9fdbdd"} Jan 27 15:08:19 crc kubenswrapper[4833]: I0127 15:08:19.028418 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerStarted","Data":"2d37a4d2753368eabed803163bd716824f2cd9d94b67c0252c1075e482db8617"} Jan 27 15:08:19 crc kubenswrapper[4833]: I0127 15:08:19.062091 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c8cnw" podStartSLOduration=3.147050072 podStartE2EDuration="16.062068672s" podCreationTimestamp="2026-01-27 15:08:03 +0000 UTC" firstStartedPulling="2026-01-27 15:08:04.902800973 +0000 UTC m=+3386.554125375" lastFinishedPulling="2026-01-27 15:08:17.817819573 +0000 UTC m=+3399.469143975" observedRunningTime="2026-01-27 15:08:19.047558617 +0000 UTC m=+3400.698883039" watchObservedRunningTime="2026-01-27 15:08:19.062068672 +0000 UTC m=+3400.713393074" Jan 27 15:08:23 crc kubenswrapper[4833]: I0127 15:08:23.375318 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:23 crc kubenswrapper[4833]: I0127 15:08:23.375892 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:24 crc kubenswrapper[4833]: I0127 15:08:24.417268 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c8cnw" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="registry-server" probeResult="failure" output=< Jan 27 15:08:24 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 15:08:24 crc kubenswrapper[4833]: > Jan 27 15:08:32 crc kubenswrapper[4833]: I0127 15:08:32.260734 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:08:32 crc kubenswrapper[4833]: I0127 15:08:32.261284 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:08:33 crc kubenswrapper[4833]: I0127 15:08:33.427067 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:33 crc kubenswrapper[4833]: I0127 15:08:33.482202 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:34 crc kubenswrapper[4833]: I0127 15:08:34.252611 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:35 crc kubenswrapper[4833]: I0127 15:08:35.174287 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c8cnw" 
podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="registry-server" containerID="cri-o://2d37a4d2753368eabed803163bd716824f2cd9d94b67c0252c1075e482db8617" gracePeriod=2 Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.187122 4833 generic.go:334] "Generic (PLEG): container finished" podID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerID="2d37a4d2753368eabed803163bd716824f2cd9d94b67c0252c1075e482db8617" exitCode=0 Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.187236 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerDied","Data":"2d37a4d2753368eabed803163bd716824f2cd9d94b67c0252c1075e482db8617"} Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.360921 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.523855 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtwj5\" (UniqueName: \"kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5\") pod \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.524042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content\") pod \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\" (UID: \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.524183 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities\") pod \"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\" (UID: 
\"a4d872c6-e44d-4912-ab3e-42b14ff0ad77\") " Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.525219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities" (OuterVolumeSpecName: "utilities") pod "a4d872c6-e44d-4912-ab3e-42b14ff0ad77" (UID: "a4d872c6-e44d-4912-ab3e-42b14ff0ad77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.530437 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5" (OuterVolumeSpecName: "kube-api-access-qtwj5") pod "a4d872c6-e44d-4912-ab3e-42b14ff0ad77" (UID: "a4d872c6-e44d-4912-ab3e-42b14ff0ad77"). InnerVolumeSpecName "kube-api-access-qtwj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.627033 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.627075 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtwj5\" (UniqueName: \"kubernetes.io/projected/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-kube-api-access-qtwj5\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.638843 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4d872c6-e44d-4912-ab3e-42b14ff0ad77" (UID: "a4d872c6-e44d-4912-ab3e-42b14ff0ad77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:36 crc kubenswrapper[4833]: I0127 15:08:36.729047 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d872c6-e44d-4912-ab3e-42b14ff0ad77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.203360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8cnw" event={"ID":"a4d872c6-e44d-4912-ab3e-42b14ff0ad77","Type":"ContainerDied","Data":"c0eb353de2f07b0ee934c554d21a3486b50801e919511c96a92a93879a1bda61"} Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.203468 4833 scope.go:117] "RemoveContainer" containerID="2d37a4d2753368eabed803163bd716824f2cd9d94b67c0252c1075e482db8617" Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.203540 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8cnw" Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.239648 4833 scope.go:117] "RemoveContainer" containerID="6f27db8f5adb04ddacf470459e0c44ce010626066fc9f92f7fb4667c5c9fdbdd" Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.264955 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.281778 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c8cnw"] Jan 27 15:08:37 crc kubenswrapper[4833]: I0127 15:08:37.289543 4833 scope.go:117] "RemoveContainer" containerID="09a30ff336d78aba19d3d7c64eef8ea78659a800a3c64383947cd933aa900cd6" Jan 27 15:08:39 crc kubenswrapper[4833]: I0127 15:08:39.233046 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" path="/var/lib/kubelet/pods/a4d872c6-e44d-4912-ab3e-42b14ff0ad77/volumes" Jan 27 15:08:43 crc 
kubenswrapper[4833]: I0127 15:08:43.588646 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:43 crc kubenswrapper[4833]: E0127 15:08:43.589702 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="registry-server" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.589717 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="registry-server" Jan 27 15:08:43 crc kubenswrapper[4833]: E0127 15:08:43.589737 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="extract-content" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.589743 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="extract-content" Jan 27 15:08:43 crc kubenswrapper[4833]: E0127 15:08:43.589759 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="extract-utilities" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.589766 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="extract-utilities" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.590020 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d872c6-e44d-4912-ab3e-42b14ff0ad77" containerName="registry-server" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.591722 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.613164 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.673645 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7p4\" (UniqueName: \"kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.673728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.673763 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.775459 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7p4\" (UniqueName: \"kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.775521 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.775549 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.776076 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.776182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.802928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7p4\" (UniqueName: \"kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4\") pod \"certified-operators-bq52p\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:43 crc kubenswrapper[4833]: I0127 15:08:43.913223 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:44 crc kubenswrapper[4833]: I0127 15:08:44.468453 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:45 crc kubenswrapper[4833]: I0127 15:08:45.293434 4833 generic.go:334] "Generic (PLEG): container finished" podID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerID="f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42" exitCode=0 Jan 27 15:08:45 crc kubenswrapper[4833]: I0127 15:08:45.293570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerDied","Data":"f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42"} Jan 27 15:08:45 crc kubenswrapper[4833]: I0127 15:08:45.293937 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerStarted","Data":"5a1c8adfd02670a99edcc70b43766591804d5d3c9dcefd388749eea371d1752b"} Jan 27 15:08:47 crc kubenswrapper[4833]: I0127 15:08:47.316862 4833 generic.go:334] "Generic (PLEG): container finished" podID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerID="8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c" exitCode=0 Jan 27 15:08:47 crc kubenswrapper[4833]: I0127 15:08:47.316979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerDied","Data":"8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c"} Jan 27 15:08:48 crc kubenswrapper[4833]: I0127 15:08:48.328742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" 
event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerStarted","Data":"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5"} Jan 27 15:08:49 crc kubenswrapper[4833]: I0127 15:08:49.357914 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bq52p" podStartSLOduration=3.657706084 podStartE2EDuration="6.357889608s" podCreationTimestamp="2026-01-27 15:08:43 +0000 UTC" firstStartedPulling="2026-01-27 15:08:45.295995622 +0000 UTC m=+3426.947320064" lastFinishedPulling="2026-01-27 15:08:47.996179166 +0000 UTC m=+3429.647503588" observedRunningTime="2026-01-27 15:08:49.355246043 +0000 UTC m=+3431.006570455" watchObservedRunningTime="2026-01-27 15:08:49.357889608 +0000 UTC m=+3431.009214010" Jan 27 15:08:53 crc kubenswrapper[4833]: I0127 15:08:53.913736 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:53 crc kubenswrapper[4833]: I0127 15:08:53.914248 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:53 crc kubenswrapper[4833]: I0127 15:08:53.975837 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:54 crc kubenswrapper[4833]: I0127 15:08:54.462718 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:54 crc kubenswrapper[4833]: I0127 15:08:54.510970 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:56 crc kubenswrapper[4833]: I0127 15:08:56.409593 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bq52p" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="registry-server" 
containerID="cri-o://e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5" gracePeriod=2 Jan 27 15:08:56 crc kubenswrapper[4833]: I0127 15:08:56.949942 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.054497 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj7p4\" (UniqueName: \"kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4\") pod \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.055643 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content\") pod \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.055704 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities\") pod \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\" (UID: \"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801\") " Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.056845 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities" (OuterVolumeSpecName: "utilities") pod "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" (UID: "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.060045 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4" (OuterVolumeSpecName: "kube-api-access-bj7p4") pod "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" (UID: "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801"). InnerVolumeSpecName "kube-api-access-bj7p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.115687 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" (UID: "64da633a-c13d-4dc0-a0b0-9cbcaa4f5801"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.158279 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj7p4\" (UniqueName: \"kubernetes.io/projected/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-kube-api-access-bj7p4\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.158314 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.158323 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.424558 4833 generic.go:334] "Generic (PLEG): container finished" podID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" 
containerID="e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5" exitCode=0 Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.424635 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerDied","Data":"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5"} Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.424654 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq52p" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.424685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq52p" event={"ID":"64da633a-c13d-4dc0-a0b0-9cbcaa4f5801","Type":"ContainerDied","Data":"5a1c8adfd02670a99edcc70b43766591804d5d3c9dcefd388749eea371d1752b"} Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.424715 4833 scope.go:117] "RemoveContainer" containerID="e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.460097 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.460193 4833 scope.go:117] "RemoveContainer" containerID="8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.468971 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bq52p"] Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.483797 4833 scope.go:117] "RemoveContainer" containerID="f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.548873 4833 scope.go:117] "RemoveContainer" containerID="e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5" Jan 27 
15:08:57 crc kubenswrapper[4833]: E0127 15:08:57.550532 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5\": container with ID starting with e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5 not found: ID does not exist" containerID="e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.550589 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5"} err="failed to get container status \"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5\": rpc error: code = NotFound desc = could not find container \"e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5\": container with ID starting with e1fb2f2f7cc92513f05be7caba273423f9fc8e54f848ac169b7c7e762e938de5 not found: ID does not exist" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.550624 4833 scope.go:117] "RemoveContainer" containerID="8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c" Jan 27 15:08:57 crc kubenswrapper[4833]: E0127 15:08:57.551988 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c\": container with ID starting with 8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c not found: ID does not exist" containerID="8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.552059 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c"} err="failed to get container status 
\"8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c\": rpc error: code = NotFound desc = could not find container \"8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c\": container with ID starting with 8d39c78b2d5e699a8eddf13fe5e03ddab500f4a234878fecb1b5d19bf431186c not found: ID does not exist" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.552086 4833 scope.go:117] "RemoveContainer" containerID="f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42" Jan 27 15:08:57 crc kubenswrapper[4833]: E0127 15:08:57.552431 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42\": container with ID starting with f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42 not found: ID does not exist" containerID="f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42" Jan 27 15:08:57 crc kubenswrapper[4833]: I0127 15:08:57.552502 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42"} err="failed to get container status \"f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42\": rpc error: code = NotFound desc = could not find container \"f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42\": container with ID starting with f699ad0e171a4e8508558bf928719333e210c48498f567ef13243d658e59ba42 not found: ID does not exist" Jan 27 15:08:59 crc kubenswrapper[4833]: I0127 15:08:59.225317 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" path="/var/lib/kubelet/pods/64da633a-c13d-4dc0-a0b0-9cbcaa4f5801/volumes" Jan 27 15:09:02 crc kubenswrapper[4833]: I0127 15:09:02.261015 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:09:02 crc kubenswrapper[4833]: I0127 15:09:02.261725 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.260778 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.261382 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.261426 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.262187 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:09:32 crc 
kubenswrapper[4833]: I0127 15:09:32.262260 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5" gracePeriod=600 Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.881341 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5" exitCode=0 Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.881418 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5"} Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.881678 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc"} Jan 27 15:09:32 crc kubenswrapper[4833]: I0127 15:09:32.881699 4833 scope.go:117] "RemoveContainer" containerID="116956ab860bda32936da708929cb7df8b287b0b97240f2e87f6e7bb7256a862" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.143673 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:39 crc kubenswrapper[4833]: E0127 15:09:39.144944 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="extract-content" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.144966 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="extract-content" Jan 27 15:09:39 crc kubenswrapper[4833]: E0127 15:09:39.145000 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="extract-utilities" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.145010 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="extract-utilities" Jan 27 15:09:39 crc kubenswrapper[4833]: E0127 15:09:39.145023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="registry-server" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.145034 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="registry-server" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.145307 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="64da633a-c13d-4dc0-a0b0-9cbcaa4f5801" containerName="registry-server" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.147513 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.156053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.160892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.161034 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzrq\" (UniqueName: \"kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.161085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.263171 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzrq\" (UniqueName: \"kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.263785 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.263905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.270892 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.271437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.287657 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzrq\" (UniqueName: \"kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq\") pod \"community-operators-6dtsz\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:39 crc kubenswrapper[4833]: I0127 15:09:39.480238 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:40 crc kubenswrapper[4833]: I0127 15:09:40.042213 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:40 crc kubenswrapper[4833]: I0127 15:09:40.963366 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerID="174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689" exitCode=0 Jan 27 15:09:40 crc kubenswrapper[4833]: I0127 15:09:40.963449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerDied","Data":"174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689"} Jan 27 15:09:40 crc kubenswrapper[4833]: I0127 15:09:40.963681 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerStarted","Data":"61cd2e2987757eeaf8c7014db6c9cb918944286c928ab47b41541eb247e2ee88"} Jan 27 15:09:42 crc kubenswrapper[4833]: I0127 15:09:42.988900 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerID="9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519" exitCode=0 Jan 27 15:09:42 crc kubenswrapper[4833]: I0127 15:09:42.989023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerDied","Data":"9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519"} Jan 27 15:09:44 crc kubenswrapper[4833]: I0127 15:09:44.005325 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" 
event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerStarted","Data":"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3"} Jan 27 15:09:44 crc kubenswrapper[4833]: I0127 15:09:44.031925 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dtsz" podStartSLOduration=2.590991386 podStartE2EDuration="5.03190436s" podCreationTimestamp="2026-01-27 15:09:39 +0000 UTC" firstStartedPulling="2026-01-27 15:09:40.964941584 +0000 UTC m=+3482.616265996" lastFinishedPulling="2026-01-27 15:09:43.405854568 +0000 UTC m=+3485.057178970" observedRunningTime="2026-01-27 15:09:44.029829641 +0000 UTC m=+3485.681154043" watchObservedRunningTime="2026-01-27 15:09:44.03190436 +0000 UTC m=+3485.683228762" Jan 27 15:09:49 crc kubenswrapper[4833]: I0127 15:09:49.480789 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:49 crc kubenswrapper[4833]: I0127 15:09:49.481325 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:49 crc kubenswrapper[4833]: I0127 15:09:49.527149 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:50 crc kubenswrapper[4833]: I0127 15:09:50.112238 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:50 crc kubenswrapper[4833]: I0127 15:09:50.168178 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.079652 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dtsz" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="registry-server" 
containerID="cri-o://b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3" gracePeriod=2 Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.564533 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.646295 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities\") pod \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.646365 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltzrq\" (UniqueName: \"kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq\") pod \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.646396 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content\") pod \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\" (UID: \"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32\") " Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.647494 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities" (OuterVolumeSpecName: "utilities") pod "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" (UID: "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.654036 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq" (OuterVolumeSpecName: "kube-api-access-ltzrq") pod "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" (UID: "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32"). InnerVolumeSpecName "kube-api-access-ltzrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.711254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" (UID: "dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.749419 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.749470 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltzrq\" (UniqueName: \"kubernetes.io/projected/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-kube-api-access-ltzrq\") on node \"crc\" DevicePath \"\"" Jan 27 15:09:52 crc kubenswrapper[4833]: I0127 15:09:52.749485 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.094888 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" 
containerID="b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3" exitCode=0 Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.094992 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dtsz" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.094989 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerDied","Data":"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3"} Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.096818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dtsz" event={"ID":"dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32","Type":"ContainerDied","Data":"61cd2e2987757eeaf8c7014db6c9cb918944286c928ab47b41541eb247e2ee88"} Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.096875 4833 scope.go:117] "RemoveContainer" containerID="b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.135827 4833 scope.go:117] "RemoveContainer" containerID="9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.143662 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.153963 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dtsz"] Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.174329 4833 scope.go:117] "RemoveContainer" containerID="174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.206424 4833 scope.go:117] "RemoveContainer" containerID="b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3" Jan 27 
15:09:53 crc kubenswrapper[4833]: E0127 15:09:53.206918 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3\": container with ID starting with b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3 not found: ID does not exist" containerID="b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.206990 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3"} err="failed to get container status \"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3\": rpc error: code = NotFound desc = could not find container \"b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3\": container with ID starting with b06e0cfefb2adc8bfb7754ad3bf9cef9dbaab2948cce46e8baef4f1f8cfe95a3 not found: ID does not exist" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.207069 4833 scope.go:117] "RemoveContainer" containerID="9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519" Jan 27 15:09:53 crc kubenswrapper[4833]: E0127 15:09:53.207496 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519\": container with ID starting with 9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519 not found: ID does not exist" containerID="9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.207608 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519"} err="failed to get container status 
\"9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519\": rpc error: code = NotFound desc = could not find container \"9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519\": container with ID starting with 9ad0c9e38a28a4f95ac05e4e62ca3c6222d3549d066cea18535095ddf0810519 not found: ID does not exist" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.207692 4833 scope.go:117] "RemoveContainer" containerID="174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689" Jan 27 15:09:53 crc kubenswrapper[4833]: E0127 15:09:53.208037 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689\": container with ID starting with 174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689 not found: ID does not exist" containerID="174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.208126 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689"} err="failed to get container status \"174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689\": rpc error: code = NotFound desc = could not find container \"174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689\": container with ID starting with 174c87b822dfc0ad525f852bfd6ce5ad0616b3458312f4bbd9fb037360df4689 not found: ID does not exist" Jan 27 15:09:53 crc kubenswrapper[4833]: I0127 15:09:53.225264 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" path="/var/lib/kubelet/pods/dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32/volumes" Jan 27 15:11:32 crc kubenswrapper[4833]: I0127 15:11:32.261160 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:11:32 crc kubenswrapper[4833]: I0127 15:11:32.261830 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:12:02 crc kubenswrapper[4833]: I0127 15:12:02.260769 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:12:02 crc kubenswrapper[4833]: I0127 15:12:02.261305 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.435070 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:06 crc kubenswrapper[4833]: E0127 15:12:06.436594 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="extract-utilities" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.436681 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="extract-utilities" Jan 27 15:12:06 crc kubenswrapper[4833]: E0127 15:12:06.436748 4833 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="extract-content" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.436808 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="extract-content" Jan 27 15:12:06 crc kubenswrapper[4833]: E0127 15:12:06.436900 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="registry-server" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.436966 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="registry-server" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.437246 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7d2d8c-db9c-4b0c-8b77-62dadd1d3e32" containerName="registry-server" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.438843 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.446622 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.568850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6rw\" (UniqueName: \"kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.568926 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.569087 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.671007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.671138 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jq6rw\" (UniqueName: \"kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.671164 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.671619 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.671766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.704271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq6rw\" (UniqueName: \"kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw\") pod \"redhat-marketplace-28vfn\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:06 crc kubenswrapper[4833]: I0127 15:12:06.763099 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:07 crc kubenswrapper[4833]: I0127 15:12:07.351423 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:07 crc kubenswrapper[4833]: W0127 15:12:07.360381 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d616403_a731_4349_8185_78d49ed18eb1.slice/crio-4b7ea7bd7544b0d9fb5e29c22502c99ad52bd470298ef8ab5884bddab339eb78 WatchSource:0}: Error finding container 4b7ea7bd7544b0d9fb5e29c22502c99ad52bd470298ef8ab5884bddab339eb78: Status 404 returned error can't find the container with id 4b7ea7bd7544b0d9fb5e29c22502c99ad52bd470298ef8ab5884bddab339eb78 Jan 27 15:12:07 crc kubenswrapper[4833]: I0127 15:12:07.396489 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerStarted","Data":"4b7ea7bd7544b0d9fb5e29c22502c99ad52bd470298ef8ab5884bddab339eb78"} Jan 27 15:12:08 crc kubenswrapper[4833]: I0127 15:12:08.407610 4833 generic.go:334] "Generic (PLEG): container finished" podID="5d616403-a731-4349-8185-78d49ed18eb1" containerID="ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df" exitCode=0 Jan 27 15:12:08 crc kubenswrapper[4833]: I0127 15:12:08.407682 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerDied","Data":"ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df"} Jan 27 15:12:10 crc kubenswrapper[4833]: I0127 15:12:10.424843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" 
event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerStarted","Data":"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b"} Jan 27 15:12:12 crc kubenswrapper[4833]: I0127 15:12:12.445204 4833 generic.go:334] "Generic (PLEG): container finished" podID="5d616403-a731-4349-8185-78d49ed18eb1" containerID="3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b" exitCode=0 Jan 27 15:12:12 crc kubenswrapper[4833]: I0127 15:12:12.445248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerDied","Data":"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b"} Jan 27 15:12:13 crc kubenswrapper[4833]: I0127 15:12:13.466643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerStarted","Data":"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d"} Jan 27 15:12:13 crc kubenswrapper[4833]: I0127 15:12:13.501557 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28vfn" podStartSLOduration=2.846401902 podStartE2EDuration="7.501525946s" podCreationTimestamp="2026-01-27 15:12:06 +0000 UTC" firstStartedPulling="2026-01-27 15:12:08.409493681 +0000 UTC m=+3630.060818083" lastFinishedPulling="2026-01-27 15:12:13.064617725 +0000 UTC m=+3634.715942127" observedRunningTime="2026-01-27 15:12:13.489280243 +0000 UTC m=+3635.140604665" watchObservedRunningTime="2026-01-27 15:12:13.501525946 +0000 UTC m=+3635.152850348" Jan 27 15:12:16 crc kubenswrapper[4833]: I0127 15:12:16.764319 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:16 crc kubenswrapper[4833]: I0127 15:12:16.764979 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:16 crc kubenswrapper[4833]: I0127 15:12:16.823023 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:26 crc kubenswrapper[4833]: I0127 15:12:26.817973 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:26 crc kubenswrapper[4833]: I0127 15:12:26.874966 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:27 crc kubenswrapper[4833]: I0127 15:12:27.606765 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28vfn" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="registry-server" containerID="cri-o://f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d" gracePeriod=2 Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.103020 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.274084 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content\") pod \"5d616403-a731-4349-8185-78d49ed18eb1\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.274144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq6rw\" (UniqueName: \"kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw\") pod \"5d616403-a731-4349-8185-78d49ed18eb1\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.274270 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities\") pod \"5d616403-a731-4349-8185-78d49ed18eb1\" (UID: \"5d616403-a731-4349-8185-78d49ed18eb1\") " Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.275365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities" (OuterVolumeSpecName: "utilities") pod "5d616403-a731-4349-8185-78d49ed18eb1" (UID: "5d616403-a731-4349-8185-78d49ed18eb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.280594 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw" (OuterVolumeSpecName: "kube-api-access-jq6rw") pod "5d616403-a731-4349-8185-78d49ed18eb1" (UID: "5d616403-a731-4349-8185-78d49ed18eb1"). InnerVolumeSpecName "kube-api-access-jq6rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.295507 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d616403-a731-4349-8185-78d49ed18eb1" (UID: "5d616403-a731-4349-8185-78d49ed18eb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.376895 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.376935 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq6rw\" (UniqueName: \"kubernetes.io/projected/5d616403-a731-4349-8185-78d49ed18eb1-kube-api-access-jq6rw\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.376951 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d616403-a731-4349-8185-78d49ed18eb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.623807 4833 generic.go:334] "Generic (PLEG): container finished" podID="5d616403-a731-4349-8185-78d49ed18eb1" containerID="f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d" exitCode=0 Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.623854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerDied","Data":"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d"} Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.623860 4833 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28vfn" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.623889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28vfn" event={"ID":"5d616403-a731-4349-8185-78d49ed18eb1","Type":"ContainerDied","Data":"4b7ea7bd7544b0d9fb5e29c22502c99ad52bd470298ef8ab5884bddab339eb78"} Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.623910 4833 scope.go:117] "RemoveContainer" containerID="f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.665110 4833 scope.go:117] "RemoveContainer" containerID="3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.672583 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.685207 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28vfn"] Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.690241 4833 scope.go:117] "RemoveContainer" containerID="ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.732832 4833 scope.go:117] "RemoveContainer" containerID="f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d" Jan 27 15:12:28 crc kubenswrapper[4833]: E0127 15:12:28.733333 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d\": container with ID starting with f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d not found: ID does not exist" containerID="f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.733383 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d"} err="failed to get container status \"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d\": rpc error: code = NotFound desc = could not find container \"f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d\": container with ID starting with f67cd1fcf4afe5c3137b483b3c2a236c9346b749d9324be23f23b7602174d51d not found: ID does not exist" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.733416 4833 scope.go:117] "RemoveContainer" containerID="3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b" Jan 27 15:12:28 crc kubenswrapper[4833]: E0127 15:12:28.733753 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b\": container with ID starting with 3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b not found: ID does not exist" containerID="3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.733797 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b"} err="failed to get container status \"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b\": rpc error: code = NotFound desc = could not find container \"3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b\": container with ID starting with 3219a3fbd4337531067303b618d14dc8b620b8a2c32535d55a452de75c93744b not found: ID does not exist" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.733826 4833 scope.go:117] "RemoveContainer" containerID="ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df" Jan 27 15:12:28 crc kubenswrapper[4833]: E0127 
15:12:28.734198 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df\": container with ID starting with ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df not found: ID does not exist" containerID="ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df" Jan 27 15:12:28 crc kubenswrapper[4833]: I0127 15:12:28.734246 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df"} err="failed to get container status \"ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df\": rpc error: code = NotFound desc = could not find container \"ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df\": container with ID starting with ba841613896aa485962c3144280414b9da80af269b2a79518b3dde513d0530df not found: ID does not exist" Jan 27 15:12:29 crc kubenswrapper[4833]: I0127 15:12:29.247198 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d616403-a731-4349-8185-78d49ed18eb1" path="/var/lib/kubelet/pods/5d616403-a731-4349-8185-78d49ed18eb1/volumes" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.260654 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.260723 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.260775 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.261333 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.261390 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" gracePeriod=600 Jan 27 15:12:32 crc kubenswrapper[4833]: E0127 15:12:32.398000 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.686691 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" exitCode=0 Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.686825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc"} Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.686931 4833 scope.go:117] "RemoveContainer" containerID="bbe494ad94fab06b5c945cd1023861cb7e77918e726ff00e19ce9efa476f38f5" Jan 27 15:12:32 crc kubenswrapper[4833]: I0127 15:12:32.688010 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:12:32 crc kubenswrapper[4833]: E0127 15:12:32.688435 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:12:48 crc kubenswrapper[4833]: I0127 15:12:48.210637 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:12:48 crc kubenswrapper[4833]: E0127 15:12:48.211590 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:13:03 crc kubenswrapper[4833]: I0127 15:13:03.211920 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:13:03 crc kubenswrapper[4833]: E0127 15:13:03.212840 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:13:14 crc kubenswrapper[4833]: I0127 15:13:14.212341 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:13:14 crc kubenswrapper[4833]: E0127 15:13:14.213834 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:13:28 crc kubenswrapper[4833]: I0127 15:13:28.210889 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:13:28 crc kubenswrapper[4833]: E0127 15:13:28.212869 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:13:41 crc kubenswrapper[4833]: I0127 15:13:41.210643 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:13:41 crc kubenswrapper[4833]: E0127 15:13:41.211401 4833 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:13:55 crc kubenswrapper[4833]: I0127 15:13:55.220574 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:13:55 crc kubenswrapper[4833]: E0127 15:13:55.224307 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:14:07 crc kubenswrapper[4833]: I0127 15:14:07.211211 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:14:07 crc kubenswrapper[4833]: E0127 15:14:07.212592 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:14:18 crc kubenswrapper[4833]: I0127 15:14:18.211150 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:14:18 crc kubenswrapper[4833]: E0127 15:14:18.212348 4833 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:14:29 crc kubenswrapper[4833]: I0127 15:14:29.232962 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:14:29 crc kubenswrapper[4833]: E0127 15:14:29.234907 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:14:42 crc kubenswrapper[4833]: I0127 15:14:42.211558 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:14:42 crc kubenswrapper[4833]: E0127 15:14:42.212935 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:14:53 crc kubenswrapper[4833]: I0127 15:14:53.211297 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:14:53 crc kubenswrapper[4833]: E0127 15:14:53.212275 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.176857 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr"] Jan 27 15:15:00 crc kubenswrapper[4833]: E0127 15:15:00.177977 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.177997 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4833]: E0127 15:15:00.178039 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.178047 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4833]: E0127 15:15:00.178062 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.178070 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d616403-a731-4349-8185-78d49ed18eb1" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.178321 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d616403-a731-4349-8185-78d49ed18eb1" 
containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.179221 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.181460 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.183182 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.208622 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr"] Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.286467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.286569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhf6\" (UniqueName: \"kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.287251 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.389166 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.389268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhf6\" (UniqueName: \"kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.389360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.390933 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.621482 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.621664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkhf6\" (UniqueName: \"kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6\") pod \"collect-profiles-29492115-2vdwr\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:00 crc kubenswrapper[4833]: I0127 15:15:00.809115 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:01 crc kubenswrapper[4833]: I0127 15:15:01.345411 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr"] Jan 27 15:15:02 crc kubenswrapper[4833]: I0127 15:15:02.109749 4833 generic.go:334] "Generic (PLEG): container finished" podID="df5002bc-279f-4942-8b5f-b4b7974680c8" containerID="6646ce46e07ae290717a84f3ba7e6c84697c492cfee17f1184f5691e0e75479d" exitCode=0 Jan 27 15:15:02 crc kubenswrapper[4833]: I0127 15:15:02.109846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" event={"ID":"df5002bc-279f-4942-8b5f-b4b7974680c8","Type":"ContainerDied","Data":"6646ce46e07ae290717a84f3ba7e6c84697c492cfee17f1184f5691e0e75479d"} Jan 27 15:15:02 crc kubenswrapper[4833]: I0127 15:15:02.110093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" 
event={"ID":"df5002bc-279f-4942-8b5f-b4b7974680c8","Type":"ContainerStarted","Data":"660be74a04dd4c0fc245508f87efb6f47fe08eb6cb5b49213e01cbc638b77307"} Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.500796 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.669179 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume\") pod \"df5002bc-279f-4942-8b5f-b4b7974680c8\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.669309 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkhf6\" (UniqueName: \"kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6\") pod \"df5002bc-279f-4942-8b5f-b4b7974680c8\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.669983 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume" (OuterVolumeSpecName: "config-volume") pod "df5002bc-279f-4942-8b5f-b4b7974680c8" (UID: "df5002bc-279f-4942-8b5f-b4b7974680c8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.670551 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume\") pod \"df5002bc-279f-4942-8b5f-b4b7974680c8\" (UID: \"df5002bc-279f-4942-8b5f-b4b7974680c8\") " Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.671052 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df5002bc-279f-4942-8b5f-b4b7974680c8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.675272 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6" (OuterVolumeSpecName: "kube-api-access-tkhf6") pod "df5002bc-279f-4942-8b5f-b4b7974680c8" (UID: "df5002bc-279f-4942-8b5f-b4b7974680c8"). InnerVolumeSpecName "kube-api-access-tkhf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.675431 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df5002bc-279f-4942-8b5f-b4b7974680c8" (UID: "df5002bc-279f-4942-8b5f-b4b7974680c8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.772736 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkhf6\" (UniqueName: \"kubernetes.io/projected/df5002bc-279f-4942-8b5f-b4b7974680c8-kube-api-access-tkhf6\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:03 crc kubenswrapper[4833]: I0127 15:15:03.772767 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df5002bc-279f-4942-8b5f-b4b7974680c8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4833]: I0127 15:15:04.128815 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" event={"ID":"df5002bc-279f-4942-8b5f-b4b7974680c8","Type":"ContainerDied","Data":"660be74a04dd4c0fc245508f87efb6f47fe08eb6cb5b49213e01cbc638b77307"} Jan 27 15:15:04 crc kubenswrapper[4833]: I0127 15:15:04.129117 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660be74a04dd4c0fc245508f87efb6f47fe08eb6cb5b49213e01cbc638b77307" Jan 27 15:15:04 crc kubenswrapper[4833]: I0127 15:15:04.129182 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-2vdwr" Jan 27 15:15:04 crc kubenswrapper[4833]: E0127 15:15:04.359596 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf5002bc_279f_4942_8b5f_b4b7974680c8.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:15:04 crc kubenswrapper[4833]: I0127 15:15:04.586917 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm"] Jan 27 15:15:04 crc kubenswrapper[4833]: I0127 15:15:04.599324 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-8xzgm"] Jan 27 15:15:05 crc kubenswrapper[4833]: I0127 15:15:05.250235 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a87d6aa-97b8-4bd0-afe4-92f991e99a1a" path="/var/lib/kubelet/pods/9a87d6aa-97b8-4bd0-afe4-92f991e99a1a/volumes" Jan 27 15:15:07 crc kubenswrapper[4833]: I0127 15:15:07.210901 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:15:07 crc kubenswrapper[4833]: E0127 15:15:07.211376 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:15:22 crc kubenswrapper[4833]: I0127 15:15:22.210965 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:15:22 crc kubenswrapper[4833]: E0127 15:15:22.212519 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:15:36 crc kubenswrapper[4833]: I0127 15:15:36.211533 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:15:36 crc kubenswrapper[4833]: E0127 15:15:36.212414 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:15:48 crc kubenswrapper[4833]: I0127 15:15:48.211160 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:15:48 crc kubenswrapper[4833]: E0127 15:15:48.212085 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:15:56 crc kubenswrapper[4833]: I0127 15:15:56.736246 4833 scope.go:117] "RemoveContainer" containerID="70582989ec55b8d9aedd06d847cb7342b6378b266543f07ba14999161c062d1a" Jan 27 15:16:02 crc kubenswrapper[4833]: I0127 
15:16:02.210784 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:16:02 crc kubenswrapper[4833]: E0127 15:16:02.211750 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:16:16 crc kubenswrapper[4833]: I0127 15:16:16.213796 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:16:16 crc kubenswrapper[4833]: E0127 15:16:16.215933 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:16:29 crc kubenswrapper[4833]: I0127 15:16:29.226503 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:16:29 crc kubenswrapper[4833]: E0127 15:16:29.227710 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:16:44 crc 
kubenswrapper[4833]: I0127 15:16:44.210981 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:16:44 crc kubenswrapper[4833]: E0127 15:16:44.211753 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:16:58 crc kubenswrapper[4833]: I0127 15:16:58.211317 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:16:58 crc kubenswrapper[4833]: E0127 15:16:58.212115 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:17:11 crc kubenswrapper[4833]: I0127 15:17:11.211183 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:17:11 crc kubenswrapper[4833]: E0127 15:17:11.211981 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 
27 15:17:24 crc kubenswrapper[4833]: I0127 15:17:24.211054 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:17:24 crc kubenswrapper[4833]: E0127 15:17:24.212006 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:17:36 crc kubenswrapper[4833]: I0127 15:17:36.210766 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:17:36 crc kubenswrapper[4833]: I0127 15:17:36.590632 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94"} Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.357326 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:13 crc kubenswrapper[4833]: E0127 15:18:13.359125 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5002bc-279f-4942-8b5f-b4b7974680c8" containerName="collect-profiles" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.359175 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5002bc-279f-4942-8b5f-b4b7974680c8" containerName="collect-profiles" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.359563 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5002bc-279f-4942-8b5f-b4b7974680c8" containerName="collect-profiles" Jan 27 15:18:13 crc kubenswrapper[4833]: 
I0127 15:18:13.361255 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.372039 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.518583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.518697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.518782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.620527 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 
15:18:13.620661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.620747 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.621290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.621691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.642841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv\") pod \"redhat-operators-cttd6\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:13 crc kubenswrapper[4833]: I0127 15:18:13.685718 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:14 crc kubenswrapper[4833]: I0127 15:18:14.217745 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:14 crc kubenswrapper[4833]: I0127 15:18:14.952319 4833 generic.go:334] "Generic (PLEG): container finished" podID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerID="5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f" exitCode=0 Jan 27 15:18:14 crc kubenswrapper[4833]: I0127 15:18:14.952382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerDied","Data":"5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f"} Jan 27 15:18:14 crc kubenswrapper[4833]: I0127 15:18:14.952609 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerStarted","Data":"cba1b433256f4afeaa626cc725bbc295844996ccb921c84b7bdfb9d987332946"} Jan 27 15:18:14 crc kubenswrapper[4833]: I0127 15:18:14.954576 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:18:15 crc kubenswrapper[4833]: I0127 15:18:15.965019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerStarted","Data":"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b"} Jan 27 15:18:21 crc kubenswrapper[4833]: I0127 15:18:21.014307 4833 generic.go:334] "Generic (PLEG): container finished" podID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerID="93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b" exitCode=0 Jan 27 15:18:21 crc kubenswrapper[4833]: I0127 15:18:21.014403 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerDied","Data":"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b"} Jan 27 15:18:22 crc kubenswrapper[4833]: I0127 15:18:22.026304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerStarted","Data":"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db"} Jan 27 15:18:22 crc kubenswrapper[4833]: I0127 15:18:22.047336 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cttd6" podStartSLOduration=2.5248020220000003 podStartE2EDuration="9.04731231s" podCreationTimestamp="2026-01-27 15:18:13 +0000 UTC" firstStartedPulling="2026-01-27 15:18:14.95428131 +0000 UTC m=+3996.605605712" lastFinishedPulling="2026-01-27 15:18:21.476791598 +0000 UTC m=+4003.128116000" observedRunningTime="2026-01-27 15:18:22.045752343 +0000 UTC m=+4003.697076765" watchObservedRunningTime="2026-01-27 15:18:22.04731231 +0000 UTC m=+4003.698636732" Jan 27 15:18:23 crc kubenswrapper[4833]: I0127 15:18:23.686843 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:23 crc kubenswrapper[4833]: I0127 15:18:23.687186 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:24 crc kubenswrapper[4833]: I0127 15:18:24.754061 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cttd6" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="registry-server" probeResult="failure" output=< Jan 27 15:18:24 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 15:18:24 crc kubenswrapper[4833]: > Jan 27 15:18:33 crc kubenswrapper[4833]: I0127 
15:18:33.732009 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:33 crc kubenswrapper[4833]: I0127 15:18:33.779100 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:33 crc kubenswrapper[4833]: I0127 15:18:33.971309 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.143961 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cttd6" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="registry-server" containerID="cri-o://920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db" gracePeriod=2 Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.688260 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.791679 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities\") pod \"f86562ed-f798-4820-bc57-ed30a361ea3b\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.791816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv\") pod \"f86562ed-f798-4820-bc57-ed30a361ea3b\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.792111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content\") pod \"f86562ed-f798-4820-bc57-ed30a361ea3b\" (UID: \"f86562ed-f798-4820-bc57-ed30a361ea3b\") " Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.792690 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities" (OuterVolumeSpecName: "utilities") pod "f86562ed-f798-4820-bc57-ed30a361ea3b" (UID: "f86562ed-f798-4820-bc57-ed30a361ea3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.798733 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv" (OuterVolumeSpecName: "kube-api-access-mvcvv") pod "f86562ed-f798-4820-bc57-ed30a361ea3b" (UID: "f86562ed-f798-4820-bc57-ed30a361ea3b"). InnerVolumeSpecName "kube-api-access-mvcvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.894795 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.894829 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/f86562ed-f798-4820-bc57-ed30a361ea3b-kube-api-access-mvcvv\") on node \"crc\" DevicePath \"\"" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.919130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f86562ed-f798-4820-bc57-ed30a361ea3b" (UID: "f86562ed-f798-4820-bc57-ed30a361ea3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:18:35 crc kubenswrapper[4833]: I0127 15:18:35.997331 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f86562ed-f798-4820-bc57-ed30a361ea3b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.156562 4833 generic.go:334] "Generic (PLEG): container finished" podID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerID="920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db" exitCode=0 Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.156626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerDied","Data":"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db"} Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.156739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cttd6" event={"ID":"f86562ed-f798-4820-bc57-ed30a361ea3b","Type":"ContainerDied","Data":"cba1b433256f4afeaa626cc725bbc295844996ccb921c84b7bdfb9d987332946"} Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.156762 4833 scope.go:117] "RemoveContainer" containerID="920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.158819 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cttd6" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.194728 4833 scope.go:117] "RemoveContainer" containerID="93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.198742 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.208350 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cttd6"] Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.219247 4833 scope.go:117] "RemoveContainer" containerID="5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.267772 4833 scope.go:117] "RemoveContainer" containerID="920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db" Jan 27 15:18:36 crc kubenswrapper[4833]: E0127 15:18:36.268233 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db\": container with ID starting with 920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db not found: ID does not exist" containerID="920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.268266 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db"} err="failed to get container status \"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db\": rpc error: code = NotFound desc = could not find container \"920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db\": container with ID starting with 920d7ffddd95143ccfcfb48cf48abd1fb75b563954a9ea711e37e7b1614350db not found: ID does 
not exist" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.268285 4833 scope.go:117] "RemoveContainer" containerID="93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b" Jan 27 15:18:36 crc kubenswrapper[4833]: E0127 15:18:36.268667 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b\": container with ID starting with 93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b not found: ID does not exist" containerID="93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.268689 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b"} err="failed to get container status \"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b\": rpc error: code = NotFound desc = could not find container \"93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b\": container with ID starting with 93cf6b4d9288a1adb5a7e6e7bf28ba6bf9d48dd0d0d1f60fa48f6488e712534b not found: ID does not exist" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.268701 4833 scope.go:117] "RemoveContainer" containerID="5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f" Jan 27 15:18:36 crc kubenswrapper[4833]: E0127 15:18:36.268925 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f\": container with ID starting with 5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f not found: ID does not exist" containerID="5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f" Jan 27 15:18:36 crc kubenswrapper[4833]: I0127 15:18:36.268945 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f"} err="failed to get container status \"5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f\": rpc error: code = NotFound desc = could not find container \"5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f\": container with ID starting with 5c786d2077b975051e04f5bb37edd792c2af4136cb8f00034f0a1bafb918c23f not found: ID does not exist" Jan 27 15:18:37 crc kubenswrapper[4833]: I0127 15:18:37.223403 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" path="/var/lib/kubelet/pods/f86562ed-f798-4820-bc57-ed30a361ea3b/volumes" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.023547 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:03 crc kubenswrapper[4833]: E0127 15:19:03.024369 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="extract-utilities" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.024381 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="extract-utilities" Jan 27 15:19:03 crc kubenswrapper[4833]: E0127 15:19:03.024391 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="registry-server" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.024398 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="registry-server" Jan 27 15:19:03 crc kubenswrapper[4833]: E0127 15:19:03.024417 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="extract-content" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.024423 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="extract-content" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.024660 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86562ed-f798-4820-bc57-ed30a361ea3b" containerName="registry-server" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.026342 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.035731 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.180294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rds\" (UniqueName: \"kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.180354 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.180456 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 
15:19:03.282043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.282148 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.282279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rds\" (UniqueName: \"kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.282699 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.282742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.306144 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rds\" (UniqueName: \"kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds\") pod \"certified-operators-qg9z4\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:03 crc kubenswrapper[4833]: I0127 15:19:03.348309 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:04 crc kubenswrapper[4833]: I0127 15:19:04.495479 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:05 crc kubenswrapper[4833]: I0127 15:19:05.447964 4833 generic.go:334] "Generic (PLEG): container finished" podID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerID="068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b" exitCode=0 Jan 27 15:19:05 crc kubenswrapper[4833]: I0127 15:19:05.448033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerDied","Data":"068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b"} Jan 27 15:19:05 crc kubenswrapper[4833]: I0127 15:19:05.448264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerStarted","Data":"1e60c27c73130f4ade34bed164156b29bc639af47584bab1bea7f445b0ee07c7"} Jan 27 15:19:06 crc kubenswrapper[4833]: I0127 15:19:06.459418 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerStarted","Data":"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77"} Jan 27 15:19:07 crc kubenswrapper[4833]: I0127 15:19:07.473184 4833 
generic.go:334] "Generic (PLEG): container finished" podID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerID="54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77" exitCode=0 Jan 27 15:19:07 crc kubenswrapper[4833]: I0127 15:19:07.473284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerDied","Data":"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77"} Jan 27 15:19:08 crc kubenswrapper[4833]: I0127 15:19:08.485958 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerStarted","Data":"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6"} Jan 27 15:19:08 crc kubenswrapper[4833]: I0127 15:19:08.514995 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qg9z4" podStartSLOduration=3.018457338 podStartE2EDuration="5.514975465s" podCreationTimestamp="2026-01-27 15:19:03 +0000 UTC" firstStartedPulling="2026-01-27 15:19:05.451224724 +0000 UTC m=+4047.102549126" lastFinishedPulling="2026-01-27 15:19:07.947742811 +0000 UTC m=+4049.599067253" observedRunningTime="2026-01-27 15:19:08.503951345 +0000 UTC m=+4050.155275747" watchObservedRunningTime="2026-01-27 15:19:08.514975465 +0000 UTC m=+4050.166299867" Jan 27 15:19:13 crc kubenswrapper[4833]: I0127 15:19:13.349109 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:13 crc kubenswrapper[4833]: I0127 15:19:13.349628 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:13 crc kubenswrapper[4833]: I0127 15:19:13.394568 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:13 crc kubenswrapper[4833]: I0127 15:19:13.581657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:13 crc kubenswrapper[4833]: I0127 15:19:13.631622 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:15 crc kubenswrapper[4833]: I0127 15:19:15.548059 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qg9z4" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="registry-server" containerID="cri-o://f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6" gracePeriod=2 Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.342839 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.470074 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rds\" (UniqueName: \"kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds\") pod \"3515ea2d-1706-47ae-b00b-14239f06e4c3\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.470139 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content\") pod \"3515ea2d-1706-47ae-b00b-14239f06e4c3\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.470302 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities\") pod 
\"3515ea2d-1706-47ae-b00b-14239f06e4c3\" (UID: \"3515ea2d-1706-47ae-b00b-14239f06e4c3\") " Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.471526 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities" (OuterVolumeSpecName: "utilities") pod "3515ea2d-1706-47ae-b00b-14239f06e4c3" (UID: "3515ea2d-1706-47ae-b00b-14239f06e4c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.482940 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds" (OuterVolumeSpecName: "kube-api-access-p8rds") pod "3515ea2d-1706-47ae-b00b-14239f06e4c3" (UID: "3515ea2d-1706-47ae-b00b-14239f06e4c3"). InnerVolumeSpecName "kube-api-access-p8rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.565245 4833 generic.go:334] "Generic (PLEG): container finished" podID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerID="f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6" exitCode=0 Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.565301 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerDied","Data":"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6"} Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.565332 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qg9z4" event={"ID":"3515ea2d-1706-47ae-b00b-14239f06e4c3","Type":"ContainerDied","Data":"1e60c27c73130f4ade34bed164156b29bc639af47584bab1bea7f445b0ee07c7"} Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.565353 4833 scope.go:117] "RemoveContainer" 
containerID="f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.565570 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qg9z4" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.574180 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rds\" (UniqueName: \"kubernetes.io/projected/3515ea2d-1706-47ae-b00b-14239f06e4c3-kube-api-access-p8rds\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.574210 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.586303 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3515ea2d-1706-47ae-b00b-14239f06e4c3" (UID: "3515ea2d-1706-47ae-b00b-14239f06e4c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.599269 4833 scope.go:117] "RemoveContainer" containerID="54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.620689 4833 scope.go:117] "RemoveContainer" containerID="068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.676603 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3515ea2d-1706-47ae-b00b-14239f06e4c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.677263 4833 scope.go:117] "RemoveContainer" containerID="f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6" Jan 27 15:19:16 crc kubenswrapper[4833]: E0127 15:19:16.677796 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6\": container with ID starting with f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6 not found: ID does not exist" containerID="f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.677847 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6"} err="failed to get container status \"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6\": rpc error: code = NotFound desc = could not find container \"f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6\": container with ID starting with f5cf94b10a67b0955b62e7d1b8049cdf86273557e8032352077a0114d298fcf6 not found: ID does not exist" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.677885 4833 
scope.go:117] "RemoveContainer" containerID="54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77" Jan 27 15:19:16 crc kubenswrapper[4833]: E0127 15:19:16.678233 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77\": container with ID starting with 54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77 not found: ID does not exist" containerID="54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.678268 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77"} err="failed to get container status \"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77\": rpc error: code = NotFound desc = could not find container \"54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77\": container with ID starting with 54cbd1d85a3ef31853eeabb0aec4ad65bde4646ea1bd1f07fe380bfa0bbade77 not found: ID does not exist" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.678288 4833 scope.go:117] "RemoveContainer" containerID="068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b" Jan 27 15:19:16 crc kubenswrapper[4833]: E0127 15:19:16.678634 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b\": container with ID starting with 068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b not found: ID does not exist" containerID="068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.678689 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b"} err="failed to get container status \"068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b\": rpc error: code = NotFound desc = could not find container \"068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b\": container with ID starting with 068762ace54d91cd676f041258daeec41538859c99cd4f1f7f1e9235f4d53d5b not found: ID does not exist" Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.896893 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:16 crc kubenswrapper[4833]: I0127 15:19:16.906627 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qg9z4"] Jan 27 15:19:17 crc kubenswrapper[4833]: I0127 15:19:17.221960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" path="/var/lib/kubelet/pods/3515ea2d-1706-47ae-b00b-14239f06e4c3/volumes" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.069776 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:44 crc kubenswrapper[4833]: E0127 15:19:44.070764 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="extract-utilities" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.070780 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="extract-utilities" Jan 27 15:19:44 crc kubenswrapper[4833]: E0127 15:19:44.070798 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="extract-content" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.070808 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" 
containerName="extract-content" Jan 27 15:19:44 crc kubenswrapper[4833]: E0127 15:19:44.070851 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="registry-server" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.070869 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="registry-server" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.071194 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3515ea2d-1706-47ae-b00b-14239f06e4c3" containerName="registry-server" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.073710 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.101847 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.148380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.148488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dnxn\" (UniqueName: \"kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.148839 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.250595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dnxn\" (UniqueName: \"kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.250743 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.250787 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.251293 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.251398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.281685 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dnxn\" (UniqueName: \"kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn\") pod \"community-operators-5zq89\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.413329 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:44 crc kubenswrapper[4833]: I0127 15:19:44.981902 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:44 crc kubenswrapper[4833]: W0127 15:19:44.993627 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7cfc240_9b99_4716_a48d_0989d399c7a4.slice/crio-9e9f2087e5f61dfd45112b4c1eb1cb0c84d4c13b877a04fdfaf911dd0fccbab5 WatchSource:0}: Error finding container 9e9f2087e5f61dfd45112b4c1eb1cb0c84d4c13b877a04fdfaf911dd0fccbab5: Status 404 returned error can't find the container with id 9e9f2087e5f61dfd45112b4c1eb1cb0c84d4c13b877a04fdfaf911dd0fccbab5 Jan 27 15:19:45 crc kubenswrapper[4833]: I0127 15:19:45.848794 4833 generic.go:334] "Generic (PLEG): container finished" podID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerID="5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89" exitCode=0 Jan 27 15:19:45 crc kubenswrapper[4833]: I0127 15:19:45.848894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" 
event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerDied","Data":"5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89"} Jan 27 15:19:45 crc kubenswrapper[4833]: I0127 15:19:45.849121 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerStarted","Data":"9e9f2087e5f61dfd45112b4c1eb1cb0c84d4c13b877a04fdfaf911dd0fccbab5"} Jan 27 15:19:47 crc kubenswrapper[4833]: I0127 15:19:47.871922 4833 generic.go:334] "Generic (PLEG): container finished" podID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerID="25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465" exitCode=0 Jan 27 15:19:47 crc kubenswrapper[4833]: I0127 15:19:47.871984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerDied","Data":"25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465"} Jan 27 15:19:49 crc kubenswrapper[4833]: I0127 15:19:49.898805 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerStarted","Data":"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1"} Jan 27 15:19:49 crc kubenswrapper[4833]: I0127 15:19:49.923480 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5zq89" podStartSLOduration=3.230976614 podStartE2EDuration="5.923459121s" podCreationTimestamp="2026-01-27 15:19:44 +0000 UTC" firstStartedPulling="2026-01-27 15:19:45.850215054 +0000 UTC m=+4087.501539456" lastFinishedPulling="2026-01-27 15:19:48.542697551 +0000 UTC m=+4090.194021963" observedRunningTime="2026-01-27 15:19:49.918410402 +0000 UTC m=+4091.569734804" watchObservedRunningTime="2026-01-27 15:19:49.923459121 +0000 UTC 
m=+4091.574783523" Jan 27 15:19:54 crc kubenswrapper[4833]: I0127 15:19:54.414473 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:54 crc kubenswrapper[4833]: I0127 15:19:54.415006 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:54 crc kubenswrapper[4833]: I0127 15:19:54.462659 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:55 crc kubenswrapper[4833]: I0127 15:19:55.027424 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:55 crc kubenswrapper[4833]: I0127 15:19:55.079985 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:56 crc kubenswrapper[4833]: I0127 15:19:56.971215 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zq89" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="registry-server" containerID="cri-o://14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1" gracePeriod=2 Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.433945 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.541198 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities\") pod \"a7cfc240-9b99-4716-a48d-0989d399c7a4\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.541914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content\") pod \"a7cfc240-9b99-4716-a48d-0989d399c7a4\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.542174 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities" (OuterVolumeSpecName: "utilities") pod "a7cfc240-9b99-4716-a48d-0989d399c7a4" (UID: "a7cfc240-9b99-4716-a48d-0989d399c7a4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.542293 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dnxn\" (UniqueName: \"kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn\") pod \"a7cfc240-9b99-4716-a48d-0989d399c7a4\" (UID: \"a7cfc240-9b99-4716-a48d-0989d399c7a4\") " Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.543042 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.551045 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn" (OuterVolumeSpecName: "kube-api-access-9dnxn") pod "a7cfc240-9b99-4716-a48d-0989d399c7a4" (UID: "a7cfc240-9b99-4716-a48d-0989d399c7a4"). InnerVolumeSpecName "kube-api-access-9dnxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.644740 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dnxn\" (UniqueName: \"kubernetes.io/projected/a7cfc240-9b99-4716-a48d-0989d399c7a4-kube-api-access-9dnxn\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.982079 4833 generic.go:334] "Generic (PLEG): container finished" podID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerID="14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1" exitCode=0 Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.982131 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerDied","Data":"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1"} Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.982170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zq89" event={"ID":"a7cfc240-9b99-4716-a48d-0989d399c7a4","Type":"ContainerDied","Data":"9e9f2087e5f61dfd45112b4c1eb1cb0c84d4c13b877a04fdfaf911dd0fccbab5"} Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.982197 4833 scope.go:117] "RemoveContainer" containerID="14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1" Jan 27 15:19:57 crc kubenswrapper[4833]: I0127 15:19:57.982203 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zq89" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.003581 4833 scope.go:117] "RemoveContainer" containerID="25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.028035 4833 scope.go:117] "RemoveContainer" containerID="5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.088009 4833 scope.go:117] "RemoveContainer" containerID="14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1" Jan 27 15:19:58 crc kubenswrapper[4833]: E0127 15:19:58.088735 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1\": container with ID starting with 14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1 not found: ID does not exist" containerID="14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.088818 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1"} err="failed to get container status \"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1\": rpc error: code = NotFound desc = could not find container \"14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1\": container with ID starting with 14c18fbd24d94bf19ec376b5038d83d0574ef3bb3ff6d0d63318a133fbc716b1 not found: ID does not exist" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.088886 4833 scope.go:117] "RemoveContainer" containerID="25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465" Jan 27 15:19:58 crc kubenswrapper[4833]: E0127 15:19:58.089420 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465\": container with ID starting with 25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465 not found: ID does not exist" containerID="25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.089496 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465"} err="failed to get container status \"25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465\": rpc error: code = NotFound desc = could not find container \"25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465\": container with ID starting with 25299fd8d3f129a7a4cb019438dcfd9f6ec6c45d6e202333ccc026aecf3ab465 not found: ID does not exist" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.089533 4833 scope.go:117] "RemoveContainer" containerID="5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89" Jan 27 15:19:58 crc kubenswrapper[4833]: E0127 15:19:58.090090 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89\": container with ID starting with 5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89 not found: ID does not exist" containerID="5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.090160 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89"} err="failed to get container status \"5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89\": rpc error: code = NotFound desc = could not find container 
\"5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89\": container with ID starting with 5721539ff015fc5b6ab80a4f87dd263a8a2f466824febd0690f2c4100974dc89 not found: ID does not exist" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.298299 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7cfc240-9b99-4716-a48d-0989d399c7a4" (UID: "a7cfc240-9b99-4716-a48d-0989d399c7a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.359547 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7cfc240-9b99-4716-a48d-0989d399c7a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.644893 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:58 crc kubenswrapper[4833]: I0127 15:19:58.656185 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zq89"] Jan 27 15:19:59 crc kubenswrapper[4833]: I0127 15:19:59.224138 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" path="/var/lib/kubelet/pods/a7cfc240-9b99-4716-a48d-0989d399c7a4/volumes" Jan 27 15:20:02 crc kubenswrapper[4833]: I0127 15:20:02.260542 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:20:02 crc kubenswrapper[4833]: I0127 15:20:02.261145 4833 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:20:32 crc kubenswrapper[4833]: I0127 15:20:32.261026 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:20:32 crc kubenswrapper[4833]: I0127 15:20:32.261654 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.260898 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.262728 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.262801 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:21:02 crc 
kubenswrapper[4833]: I0127 15:21:02.263785 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.263865 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94" gracePeriod=600 Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.649059 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94" exitCode=0 Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.649107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94"} Jan 27 15:21:02 crc kubenswrapper[4833]: I0127 15:21:02.649144 4833 scope.go:117] "RemoveContainer" containerID="6ec95f8e07647723ed5bb6901e170601be935fc474f0354bc0360cf8911618dc" Jan 27 15:21:03 crc kubenswrapper[4833]: I0127 15:21:03.659763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535"} Jan 27 15:23:02 crc kubenswrapper[4833]: I0127 15:23:02.260773 4833 
patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:23:02 crc kubenswrapper[4833]: I0127 15:23:02.261243 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.780813 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:03 crc kubenswrapper[4833]: E0127 15:23:03.781669 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="extract-content" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.781689 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="extract-content" Jan 27 15:23:03 crc kubenswrapper[4833]: E0127 15:23:03.781707 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="extract-utilities" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.781716 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="extract-utilities" Jan 27 15:23:03 crc kubenswrapper[4833]: E0127 15:23:03.781744 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="registry-server" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.781753 4833 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="registry-server" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.781974 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cfc240-9b99-4716-a48d-0989d399c7a4" containerName="registry-server" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.783777 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.808512 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.812156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.812467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7hx\" (UniqueName: \"kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.812737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.914361 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wc7hx\" (UniqueName: \"kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.914526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.914617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.915151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.915315 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:03 crc kubenswrapper[4833]: I0127 15:23:03.943037 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wc7hx\" (UniqueName: \"kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx\") pod \"redhat-marketplace-h7z8z\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:04 crc kubenswrapper[4833]: I0127 15:23:04.102333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:04 crc kubenswrapper[4833]: I0127 15:23:04.621845 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:05 crc kubenswrapper[4833]: I0127 15:23:05.044210 4833 generic.go:334] "Generic (PLEG): container finished" podID="94263ca5-41d1-44f8-8934-24822e945509" containerID="a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719" exitCode=0 Jan 27 15:23:05 crc kubenswrapper[4833]: I0127 15:23:05.044549 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerDied","Data":"a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719"} Jan 27 15:23:05 crc kubenswrapper[4833]: I0127 15:23:05.044580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerStarted","Data":"df93ed2abdb228b4d99232a5050e90d9c5780b350ec6294d840b28dcb5b00614"} Jan 27 15:23:07 crc kubenswrapper[4833]: I0127 15:23:07.071001 4833 generic.go:334] "Generic (PLEG): container finished" podID="94263ca5-41d1-44f8-8934-24822e945509" containerID="b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095" exitCode=0 Jan 27 15:23:07 crc kubenswrapper[4833]: I0127 15:23:07.071163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" 
event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerDied","Data":"b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095"} Jan 27 15:23:08 crc kubenswrapper[4833]: I0127 15:23:08.086775 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerStarted","Data":"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00"} Jan 27 15:23:08 crc kubenswrapper[4833]: I0127 15:23:08.114625 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h7z8z" podStartSLOduration=2.6075106359999998 podStartE2EDuration="5.114602484s" podCreationTimestamp="2026-01-27 15:23:03 +0000 UTC" firstStartedPulling="2026-01-27 15:23:05.045830846 +0000 UTC m=+4286.697155258" lastFinishedPulling="2026-01-27 15:23:07.552922704 +0000 UTC m=+4289.204247106" observedRunningTime="2026-01-27 15:23:08.106838387 +0000 UTC m=+4289.758162829" watchObservedRunningTime="2026-01-27 15:23:08.114602484 +0000 UTC m=+4289.765926886" Jan 27 15:23:14 crc kubenswrapper[4833]: I0127 15:23:14.103046 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:14 crc kubenswrapper[4833]: I0127 15:23:14.103620 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:14 crc kubenswrapper[4833]: I0127 15:23:14.180815 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:14 crc kubenswrapper[4833]: I0127 15:23:14.236133 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:14 crc kubenswrapper[4833]: I0127 15:23:14.416213 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.181037 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h7z8z" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="registry-server" containerID="cri-o://e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00" gracePeriod=2 Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.715413 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.815681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc7hx\" (UniqueName: \"kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx\") pod \"94263ca5-41d1-44f8-8934-24822e945509\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.815785 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content\") pod \"94263ca5-41d1-44f8-8934-24822e945509\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.815852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities\") pod \"94263ca5-41d1-44f8-8934-24822e945509\" (UID: \"94263ca5-41d1-44f8-8934-24822e945509\") " Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.817496 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities" (OuterVolumeSpecName: "utilities") pod "94263ca5-41d1-44f8-8934-24822e945509" (UID: 
"94263ca5-41d1-44f8-8934-24822e945509"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.825727 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx" (OuterVolumeSpecName: "kube-api-access-wc7hx") pod "94263ca5-41d1-44f8-8934-24822e945509" (UID: "94263ca5-41d1-44f8-8934-24822e945509"). InnerVolumeSpecName "kube-api-access-wc7hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.860331 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94263ca5-41d1-44f8-8934-24822e945509" (UID: "94263ca5-41d1-44f8-8934-24822e945509"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.918627 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.918658 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94263ca5-41d1-44f8-8934-24822e945509-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:16 crc kubenswrapper[4833]: I0127 15:23:16.918667 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc7hx\" (UniqueName: \"kubernetes.io/projected/94263ca5-41d1-44f8-8934-24822e945509-kube-api-access-wc7hx\") on node \"crc\" DevicePath \"\"" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.192412 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="94263ca5-41d1-44f8-8934-24822e945509" containerID="e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00" exitCode=0 Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.192483 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerDied","Data":"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00"} Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.192792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h7z8z" event={"ID":"94263ca5-41d1-44f8-8934-24822e945509","Type":"ContainerDied","Data":"df93ed2abdb228b4d99232a5050e90d9c5780b350ec6294d840b28dcb5b00614"} Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.192816 4833 scope.go:117] "RemoveContainer" containerID="e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.192516 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h7z8z" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.231953 4833 scope.go:117] "RemoveContainer" containerID="b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.254640 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.262372 4833 scope.go:117] "RemoveContainer" containerID="a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.265115 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h7z8z"] Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.312514 4833 scope.go:117] "RemoveContainer" containerID="e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00" Jan 27 15:23:17 crc kubenswrapper[4833]: E0127 15:23:17.312991 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00\": container with ID starting with e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00 not found: ID does not exist" containerID="e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.313022 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00"} err="failed to get container status \"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00\": rpc error: code = NotFound desc = could not find container \"e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00\": container with ID starting with e3947dd0727aa07fb04e18df7275cd35c4991cd7f13d6d743644517dbf23db00 not found: 
ID does not exist" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.313042 4833 scope.go:117] "RemoveContainer" containerID="b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095" Jan 27 15:23:17 crc kubenswrapper[4833]: E0127 15:23:17.313556 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095\": container with ID starting with b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095 not found: ID does not exist" containerID="b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.313579 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095"} err="failed to get container status \"b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095\": rpc error: code = NotFound desc = could not find container \"b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095\": container with ID starting with b8c9a0f904da72ba72d8d1efe3dbd854985c773268c98fcdb6b4938d37b2f095 not found: ID does not exist" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.313592 4833 scope.go:117] "RemoveContainer" containerID="a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719" Jan 27 15:23:17 crc kubenswrapper[4833]: E0127 15:23:17.314002 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719\": container with ID starting with a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719 not found: ID does not exist" containerID="a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719" Jan 27 15:23:17 crc kubenswrapper[4833]: I0127 15:23:17.314022 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719"} err="failed to get container status \"a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719\": rpc error: code = NotFound desc = could not find container \"a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719\": container with ID starting with a0f7534491715424a05f6dfb700ea27592c75c3b26f738ad0350264de42df719 not found: ID does not exist" Jan 27 15:23:19 crc kubenswrapper[4833]: I0127 15:23:19.960898 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94263ca5-41d1-44f8-8934-24822e945509" path="/var/lib/kubelet/pods/94263ca5-41d1-44f8-8934-24822e945509/volumes" Jan 27 15:23:32 crc kubenswrapper[4833]: I0127 15:23:32.261697 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:23:32 crc kubenswrapper[4833]: I0127 15:23:32.262250 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.261296 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.261905 4833 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.261954 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.262922 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.263102 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" gracePeriod=600 Jan 27 15:24:02 crc kubenswrapper[4833]: E0127 15:24:02.383210 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.391195 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" 
containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" exitCode=0 Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.391258 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535"} Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.391291 4833 scope.go:117] "RemoveContainer" containerID="5adbb5b60f337898cf7069f76fd674a766cfb3dea006fa4a2c7bfb293e298f94" Jan 27 15:24:02 crc kubenswrapper[4833]: I0127 15:24:02.394111 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:24:02 crc kubenswrapper[4833]: E0127 15:24:02.394724 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:24:14 crc kubenswrapper[4833]: I0127 15:24:14.211189 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:24:14 crc kubenswrapper[4833]: E0127 15:24:14.212120 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:24:25 crc kubenswrapper[4833]: I0127 
15:24:25.211480 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:24:25 crc kubenswrapper[4833]: E0127 15:24:25.212671 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:24:37 crc kubenswrapper[4833]: I0127 15:24:37.212189 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:24:37 crc kubenswrapper[4833]: E0127 15:24:37.213484 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:24:49 crc kubenswrapper[4833]: I0127 15:24:49.219071 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:24:49 crc kubenswrapper[4833]: E0127 15:24:49.220080 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:25:04 crc 
kubenswrapper[4833]: I0127 15:25:04.211100 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:25:04 crc kubenswrapper[4833]: E0127 15:25:04.212438 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:25:17 crc kubenswrapper[4833]: I0127 15:25:17.211060 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:25:17 crc kubenswrapper[4833]: E0127 15:25:17.211854 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:25:28 crc kubenswrapper[4833]: I0127 15:25:28.210851 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:25:28 crc kubenswrapper[4833]: E0127 15:25:28.213019 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 
27 15:25:42 crc kubenswrapper[4833]: I0127 15:25:42.211098 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:25:42 crc kubenswrapper[4833]: E0127 15:25:42.212123 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:25:57 crc kubenswrapper[4833]: I0127 15:25:57.211030 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:25:57 crc kubenswrapper[4833]: E0127 15:25:57.212036 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:26:10 crc kubenswrapper[4833]: I0127 15:26:10.210735 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:26:10 crc kubenswrapper[4833]: E0127 15:26:10.211788 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:26:23 crc kubenswrapper[4833]: I0127 15:26:23.210923 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:26:23 crc kubenswrapper[4833]: E0127 15:26:23.211593 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:26:38 crc kubenswrapper[4833]: I0127 15:26:38.211590 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:26:38 crc kubenswrapper[4833]: E0127 15:26:38.212921 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:26:50 crc kubenswrapper[4833]: I0127 15:26:50.211535 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:26:50 crc kubenswrapper[4833]: E0127 15:26:50.212285 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:27:01 crc kubenswrapper[4833]: I0127 15:27:01.210614 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:27:01 crc kubenswrapper[4833]: E0127 15:27:01.211394 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:27:13 crc kubenswrapper[4833]: I0127 15:27:13.211012 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:27:13 crc kubenswrapper[4833]: E0127 15:27:13.211903 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:27:27 crc kubenswrapper[4833]: I0127 15:27:27.210569 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:27:27 crc kubenswrapper[4833]: E0127 15:27:27.212381 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:27:41 crc kubenswrapper[4833]: I0127 15:27:41.211023 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:27:41 crc kubenswrapper[4833]: E0127 15:27:41.211731 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:27:52 crc kubenswrapper[4833]: I0127 15:27:52.211774 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:27:52 crc kubenswrapper[4833]: E0127 15:27:52.212749 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:28:06 crc kubenswrapper[4833]: I0127 15:28:06.211420 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:28:06 crc kubenswrapper[4833]: E0127 15:28:06.212212 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:28:20 crc kubenswrapper[4833]: I0127 15:28:20.210682 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:28:20 crc kubenswrapper[4833]: E0127 15:28:20.211488 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:28:33 crc kubenswrapper[4833]: I0127 15:28:33.210890 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:28:33 crc kubenswrapper[4833]: E0127 15:28:33.211739 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:28:47 crc kubenswrapper[4833]: I0127 15:28:47.211045 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:28:47 crc kubenswrapper[4833]: E0127 15:28:47.211828 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:29:01 crc kubenswrapper[4833]: I0127 15:29:01.210504 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:29:01 crc kubenswrapper[4833]: E0127 15:29:01.211549 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:29:14 crc kubenswrapper[4833]: I0127 15:29:14.210780 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:29:15 crc kubenswrapper[4833]: I0127 15:29:15.141615 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e"} Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.184237 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:38 crc kubenswrapper[4833]: E0127 15:29:38.185156 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="registry-server" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.185169 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="94263ca5-41d1-44f8-8934-24822e945509" 
containerName="registry-server" Jan 27 15:29:38 crc kubenswrapper[4833]: E0127 15:29:38.185193 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="extract-content" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.185199 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="extract-content" Jan 27 15:29:38 crc kubenswrapper[4833]: E0127 15:29:38.185219 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="extract-utilities" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.185225 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="extract-utilities" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.185411 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="94263ca5-41d1-44f8-8934-24822e945509" containerName="registry-server" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.186777 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.197344 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.269074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.269261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb22w\" (UniqueName: \"kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.269319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.370712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.370895 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.370993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb22w\" (UniqueName: \"kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.371245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.371658 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.394557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb22w\" (UniqueName: \"kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w\") pod \"certified-operators-hkrj9\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:38 crc kubenswrapper[4833]: I0127 15:29:38.511421 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:39 crc kubenswrapper[4833]: I0127 15:29:39.116050 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:39 crc kubenswrapper[4833]: I0127 15:29:39.359547 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8c8e713-1e07-44da-990e-993e1018c903" containerID="9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2" exitCode=0 Jan 27 15:29:39 crc kubenswrapper[4833]: I0127 15:29:39.359652 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerDied","Data":"9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2"} Jan 27 15:29:39 crc kubenswrapper[4833]: I0127 15:29:39.359993 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerStarted","Data":"5c5c80a00be086cbbfe7234cc22d648e92aa8847132f902e6290783056c2fa21"} Jan 27 15:29:39 crc kubenswrapper[4833]: I0127 15:29:39.362707 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.381539 4833 generic.go:334] "Generic (PLEG): container finished" podID="e8c8e713-1e07-44da-990e-993e1018c903" containerID="6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6" exitCode=0 Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.381626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerDied","Data":"6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6"} Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.386440 4833 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.388891 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.397662 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.532580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.532696 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.532787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkm7k\" (UniqueName: \"kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.635726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " 
pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.635795 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.635843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkm7k\" (UniqueName: \"kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.636353 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.636677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.669608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkm7k\" (UniqueName: \"kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k\") pod \"redhat-operators-kspkn\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " pod="openshift-marketplace/redhat-operators-kspkn" Jan 
27 15:29:41 crc kubenswrapper[4833]: I0127 15:29:41.709157 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:29:42 crc kubenswrapper[4833]: I0127 15:29:42.237596 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:29:42 crc kubenswrapper[4833]: W0127 15:29:42.244729 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca46096d_c85e_4879_ac3d_53dd185b944c.slice/crio-16f270d43dc9e58ceee7363f69b3db90281efec92063a920b04c5dcf0a8ffb07 WatchSource:0}: Error finding container 16f270d43dc9e58ceee7363f69b3db90281efec92063a920b04c5dcf0a8ffb07: Status 404 returned error can't find the container with id 16f270d43dc9e58ceee7363f69b3db90281efec92063a920b04c5dcf0a8ffb07 Jan 27 15:29:42 crc kubenswrapper[4833]: I0127 15:29:42.394059 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerStarted","Data":"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059"} Jan 27 15:29:42 crc kubenswrapper[4833]: I0127 15:29:42.398355 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerStarted","Data":"16f270d43dc9e58ceee7363f69b3db90281efec92063a920b04c5dcf0a8ffb07"} Jan 27 15:29:42 crc kubenswrapper[4833]: I0127 15:29:42.415019 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hkrj9" podStartSLOduration=1.727038303 podStartE2EDuration="4.414998365s" podCreationTimestamp="2026-01-27 15:29:38 +0000 UTC" firstStartedPulling="2026-01-27 15:29:39.362394727 +0000 UTC m=+4681.013719129" lastFinishedPulling="2026-01-27 15:29:42.050354789 +0000 UTC 
m=+4683.701679191" observedRunningTime="2026-01-27 15:29:42.411928341 +0000 UTC m=+4684.063252743" watchObservedRunningTime="2026-01-27 15:29:42.414998365 +0000 UTC m=+4684.066322767" Jan 27 15:29:43 crc kubenswrapper[4833]: I0127 15:29:43.410374 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerID="660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec" exitCode=0 Jan 27 15:29:43 crc kubenswrapper[4833]: I0127 15:29:43.410586 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerDied","Data":"660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec"} Jan 27 15:29:45 crc kubenswrapper[4833]: I0127 15:29:45.429868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerStarted","Data":"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84"} Jan 27 15:29:48 crc kubenswrapper[4833]: I0127 15:29:48.512612 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:48 crc kubenswrapper[4833]: I0127 15:29:48.525643 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:48 crc kubenswrapper[4833]: I0127 15:29:48.562583 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:49 crc kubenswrapper[4833]: I0127 15:29:49.525381 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:52 crc kubenswrapper[4833]: I0127 15:29:52.576804 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:52 crc kubenswrapper[4833]: I0127 15:29:52.577193 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hkrj9" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="registry-server" containerID="cri-o://084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059" gracePeriod=2 Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.116003 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.283800 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content\") pod \"e8c8e713-1e07-44da-990e-993e1018c903\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.283969 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities\") pod \"e8c8e713-1e07-44da-990e-993e1018c903\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.284116 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb22w\" (UniqueName: \"kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w\") pod \"e8c8e713-1e07-44da-990e-993e1018c903\" (UID: \"e8c8e713-1e07-44da-990e-993e1018c903\") " Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.286028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities" (OuterVolumeSpecName: "utilities") pod "e8c8e713-1e07-44da-990e-993e1018c903" (UID: 
"e8c8e713-1e07-44da-990e-993e1018c903"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.299279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w" (OuterVolumeSpecName: "kube-api-access-vb22w") pod "e8c8e713-1e07-44da-990e-993e1018c903" (UID: "e8c8e713-1e07-44da-990e-993e1018c903"). InnerVolumeSpecName "kube-api-access-vb22w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.331054 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c8e713-1e07-44da-990e-993e1018c903" (UID: "e8c8e713-1e07-44da-990e-993e1018c903"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.389462 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb22w\" (UniqueName: \"kubernetes.io/projected/e8c8e713-1e07-44da-990e-993e1018c903-kube-api-access-vb22w\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.389505 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.389517 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c8e713-1e07-44da-990e-993e1018c903-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.503759 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="e8c8e713-1e07-44da-990e-993e1018c903" containerID="084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059" exitCode=0 Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.503804 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerDied","Data":"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059"} Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.503829 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hkrj9" event={"ID":"e8c8e713-1e07-44da-990e-993e1018c903","Type":"ContainerDied","Data":"5c5c80a00be086cbbfe7234cc22d648e92aa8847132f902e6290783056c2fa21"} Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.503846 4833 scope.go:117] "RemoveContainer" containerID="084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.503966 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hkrj9" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.528420 4833 scope.go:117] "RemoveContainer" containerID="6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.543644 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.553454 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hkrj9"] Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.567336 4833 scope.go:117] "RemoveContainer" containerID="9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.656698 4833 scope.go:117] "RemoveContainer" containerID="084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059" Jan 27 15:29:53 crc kubenswrapper[4833]: E0127 15:29:53.657103 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059\": container with ID starting with 084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059 not found: ID does not exist" containerID="084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.657138 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059"} err="failed to get container status \"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059\": rpc error: code = NotFound desc = could not find container \"084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059\": container with ID starting with 084e72fcd1dc7d8b7dadc51f5444a37ca3f060a017cc4092cde02197d0866059 not 
found: ID does not exist" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.657165 4833 scope.go:117] "RemoveContainer" containerID="6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6" Jan 27 15:29:53 crc kubenswrapper[4833]: E0127 15:29:53.657517 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6\": container with ID starting with 6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6 not found: ID does not exist" containerID="6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.657540 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6"} err="failed to get container status \"6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6\": rpc error: code = NotFound desc = could not find container \"6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6\": container with ID starting with 6ab6e11a36977aceb6a41535e31c6f81d186a675ea435ad38ee4323b196c56a6 not found: ID does not exist" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.657557 4833 scope.go:117] "RemoveContainer" containerID="9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2" Jan 27 15:29:53 crc kubenswrapper[4833]: E0127 15:29:53.657969 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2\": container with ID starting with 9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2 not found: ID does not exist" containerID="9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2" Jan 27 15:29:53 crc kubenswrapper[4833]: I0127 15:29:53.657994 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2"} err="failed to get container status \"9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2\": rpc error: code = NotFound desc = could not find container \"9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2\": container with ID starting with 9602dd3ae4802fbcc91fe05704f4c852ca25050c4ac144995ef57c4c1c7ac9e2 not found: ID does not exist" Jan 27 15:29:54 crc kubenswrapper[4833]: I0127 15:29:54.516633 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerID="48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84" exitCode=0 Jan 27 15:29:54 crc kubenswrapper[4833]: I0127 15:29:54.516720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerDied","Data":"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84"} Jan 27 15:29:55 crc kubenswrapper[4833]: I0127 15:29:55.271329 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c8e713-1e07-44da-990e-993e1018c903" path="/var/lib/kubelet/pods/e8c8e713-1e07-44da-990e-993e1018c903/volumes" Jan 27 15:29:56 crc kubenswrapper[4833]: I0127 15:29:56.545940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerStarted","Data":"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd"} Jan 27 15:29:56 crc kubenswrapper[4833]: I0127 15:29:56.584399 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kspkn" podStartSLOduration=3.061564096 podStartE2EDuration="15.584345552s" podCreationTimestamp="2026-01-27 15:29:41 +0000 UTC" 
firstStartedPulling="2026-01-27 15:29:43.411931909 +0000 UTC m=+4685.063256311" lastFinishedPulling="2026-01-27 15:29:55.934713365 +0000 UTC m=+4697.586037767" observedRunningTime="2026-01-27 15:29:56.571552973 +0000 UTC m=+4698.222877435" watchObservedRunningTime="2026-01-27 15:29:56.584345552 +0000 UTC m=+4698.235669994" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.204880 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr"] Jan 27 15:30:00 crc kubenswrapper[4833]: E0127 15:30:00.205935 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.205953 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4833]: E0127 15:30:00.205983 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="extract-content" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.205991 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="extract-content" Jan 27 15:30:00 crc kubenswrapper[4833]: E0127 15:30:00.206023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="extract-utilities" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.206032 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="extract-utilities" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.206279 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c8e713-1e07-44da-990e-993e1018c903" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.207159 4833 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.215125 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.215793 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.220466 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr"] Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.239169 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.239255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.239364 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.341807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.341889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.341990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.344274 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.358493 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.368623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl\") pod \"collect-profiles-29492130-rk7vr\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:00 crc kubenswrapper[4833]: I0127 15:30:00.541249 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.018346 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr"] Jan 27 15:30:01 crc kubenswrapper[4833]: W0127 15:30:01.026063 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e201ffc_ca6b_4726_8a05_0d2dccddb159.slice/crio-7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766 WatchSource:0}: Error finding container 7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766: Status 404 returned error can't find the container with id 7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766 Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.595518 4833 generic.go:334] "Generic (PLEG): container finished" podID="0e201ffc-ca6b-4726-8a05-0d2dccddb159" containerID="5a103bc6fe20d726e1de39b6530e488f034e405952a0ea1a4ceda801e93cb525" exitCode=0 Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.595712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" event={"ID":"0e201ffc-ca6b-4726-8a05-0d2dccddb159","Type":"ContainerDied","Data":"5a103bc6fe20d726e1de39b6530e488f034e405952a0ea1a4ceda801e93cb525"} Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.595872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" event={"ID":"0e201ffc-ca6b-4726-8a05-0d2dccddb159","Type":"ContainerStarted","Data":"7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766"} Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.710975 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:01 crc kubenswrapper[4833]: I0127 15:30:01.711026 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:02 crc kubenswrapper[4833]: I0127 15:30:02.755477 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kspkn" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="registry-server" probeResult="failure" output=< Jan 27 15:30:02 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Jan 27 15:30:02 crc kubenswrapper[4833]: > Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.442398 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.506931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl\") pod \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.507004 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume\") pod \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.507221 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume\") pod \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\" (UID: \"0e201ffc-ca6b-4726-8a05-0d2dccddb159\") " Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.507840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e201ffc-ca6b-4726-8a05-0d2dccddb159" (UID: "0e201ffc-ca6b-4726-8a05-0d2dccddb159"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.514828 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e201ffc-ca6b-4726-8a05-0d2dccddb159" (UID: "0e201ffc-ca6b-4726-8a05-0d2dccddb159"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.514886 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl" (OuterVolumeSpecName: "kube-api-access-nf7cl") pod "0e201ffc-ca6b-4726-8a05-0d2dccddb159" (UID: "0e201ffc-ca6b-4726-8a05-0d2dccddb159"). InnerVolumeSpecName "kube-api-access-nf7cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.609262 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf7cl\" (UniqueName: \"kubernetes.io/projected/0e201ffc-ca6b-4726-8a05-0d2dccddb159-kube-api-access-nf7cl\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.609616 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e201ffc-ca6b-4726-8a05-0d2dccddb159-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.609627 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e201ffc-ca6b-4726-8a05-0d2dccddb159-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.627162 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" event={"ID":"0e201ffc-ca6b-4726-8a05-0d2dccddb159","Type":"ContainerDied","Data":"7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766"} Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.627214 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eca8d7b470b7f5e9b34f9c9ed25c3b232fd83fe9f35721b86fe4d6128299766" Jan 27 15:30:03 crc kubenswrapper[4833]: I0127 15:30:03.627290 4833 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-rk7vr" Jan 27 15:30:04 crc kubenswrapper[4833]: I0127 15:30:04.541834 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd"] Jan 27 15:30:04 crc kubenswrapper[4833]: I0127 15:30:04.552944 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-v5pbd"] Jan 27 15:30:05 crc kubenswrapper[4833]: I0127 15:30:05.230346 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51cf794-1fe2-4bdc-ac6e-15e5174d0545" path="/var/lib/kubelet/pods/e51cf794-1fe2-4bdc-ac6e-15e5174d0545/volumes" Jan 27 15:30:11 crc kubenswrapper[4833]: I0127 15:30:11.764405 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:11 crc kubenswrapper[4833]: I0127 15:30:11.825115 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:12 crc kubenswrapper[4833]: I0127 15:30:12.587334 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:30:13 crc kubenswrapper[4833]: I0127 15:30:13.732964 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kspkn" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="registry-server" containerID="cri-o://a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd" gracePeriod=2 Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.270911 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.421568 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities\") pod \"ca46096d-c85e-4879-ac3d-53dd185b944c\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.421661 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content\") pod \"ca46096d-c85e-4879-ac3d-53dd185b944c\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.421824 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkm7k\" (UniqueName: \"kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k\") pod \"ca46096d-c85e-4879-ac3d-53dd185b944c\" (UID: \"ca46096d-c85e-4879-ac3d-53dd185b944c\") " Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.422536 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities" (OuterVolumeSpecName: "utilities") pod "ca46096d-c85e-4879-ac3d-53dd185b944c" (UID: "ca46096d-c85e-4879-ac3d-53dd185b944c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.430517 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k" (OuterVolumeSpecName: "kube-api-access-wkm7k") pod "ca46096d-c85e-4879-ac3d-53dd185b944c" (UID: "ca46096d-c85e-4879-ac3d-53dd185b944c"). InnerVolumeSpecName "kube-api-access-wkm7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.524346 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkm7k\" (UniqueName: \"kubernetes.io/projected/ca46096d-c85e-4879-ac3d-53dd185b944c-kube-api-access-wkm7k\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.524386 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.558947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca46096d-c85e-4879-ac3d-53dd185b944c" (UID: "ca46096d-c85e-4879-ac3d-53dd185b944c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.626531 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca46096d-c85e-4879-ac3d-53dd185b944c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.749999 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerID="a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd" exitCode=0 Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.750054 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerDied","Data":"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd"} Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.750088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-kspkn" event={"ID":"ca46096d-c85e-4879-ac3d-53dd185b944c","Type":"ContainerDied","Data":"16f270d43dc9e58ceee7363f69b3db90281efec92063a920b04c5dcf0a8ffb07"} Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.750094 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kspkn" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.750109 4833 scope.go:117] "RemoveContainer" containerID="a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.792660 4833 scope.go:117] "RemoveContainer" containerID="48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.793860 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.814637 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kspkn"] Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.822334 4833 scope.go:117] "RemoveContainer" containerID="660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.864934 4833 scope.go:117] "RemoveContainer" containerID="a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd" Jan 27 15:30:14 crc kubenswrapper[4833]: E0127 15:30:14.865402 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd\": container with ID starting with a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd not found: ID does not exist" containerID="a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.865468 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd"} err="failed to get container status \"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd\": rpc error: code = NotFound desc = could not find container \"a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd\": container with ID starting with a4134b8dedf47972c97350979608c072cf73086c745c855f9398ec2c3622e5dd not found: ID does not exist" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.865509 4833 scope.go:117] "RemoveContainer" containerID="48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84" Jan 27 15:30:14 crc kubenswrapper[4833]: E0127 15:30:14.865786 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84\": container with ID starting with 48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84 not found: ID does not exist" containerID="48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.865819 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84"} err="failed to get container status \"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84\": rpc error: code = NotFound desc = could not find container \"48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84\": container with ID starting with 48457bdf36a776ccce3544cf4a49b8a0cb3775533da0ffe91e493ebb82640b84 not found: ID does not exist" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.865839 4833 scope.go:117] "RemoveContainer" containerID="660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec" Jan 27 15:30:14 crc kubenswrapper[4833]: E0127 
15:30:14.866155 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec\": container with ID starting with 660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec not found: ID does not exist" containerID="660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec" Jan 27 15:30:14 crc kubenswrapper[4833]: I0127 15:30:14.866192 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec"} err="failed to get container status \"660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec\": rpc error: code = NotFound desc = could not find container \"660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec\": container with ID starting with 660d2f3dcf9beb2f36121b07f99804176d2929be89bdbc9bd2d55b9bb86c9fec not found: ID does not exist" Jan 27 15:30:15 crc kubenswrapper[4833]: I0127 15:30:15.226531 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" path="/var/lib/kubelet/pods/ca46096d-c85e-4879-ac3d-53dd185b944c/volumes" Jan 27 15:30:57 crc kubenswrapper[4833]: I0127 15:30:57.249750 4833 scope.go:117] "RemoveContainer" containerID="8b67e3f17764465ecc821aab87a2720fd96ce8cb3e467e419b160736d4042c38" Jan 27 15:31:32 crc kubenswrapper[4833]: I0127 15:31:32.260714 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:31:32 crc kubenswrapper[4833]: I0127 15:31:32.261386 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:32:02 crc kubenswrapper[4833]: I0127 15:32:02.261383 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:32:02 crc kubenswrapper[4833]: I0127 15:32:02.262026 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:32:32 crc kubenswrapper[4833]: I0127 15:32:32.261033 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:32:32 crc kubenswrapper[4833]: I0127 15:32:32.261748 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:32:32 crc kubenswrapper[4833]: I0127 15:32:32.261812 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:32:32 crc kubenswrapper[4833]: I0127 15:32:32.262938 4833 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:32:32 crc kubenswrapper[4833]: I0127 15:32:32.263035 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e" gracePeriod=600 Jan 27 15:32:33 crc kubenswrapper[4833]: I0127 15:32:33.112300 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e" exitCode=0 Jan 27 15:32:33 crc kubenswrapper[4833]: I0127 15:32:33.112345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e"} Jan 27 15:32:33 crc kubenswrapper[4833]: I0127 15:32:33.112405 4833 scope.go:117] "RemoveContainer" containerID="5ec2e522d48da2acfd8e15c9282c5da1e8178be62fa826cf799a6bddadc1b535" Jan 27 15:32:34 crc kubenswrapper[4833]: I0127 15:32:34.127048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d"} Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.260493 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:06 crc kubenswrapper[4833]: E0127 15:33:06.261804 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e201ffc-ca6b-4726-8a05-0d2dccddb159" containerName="collect-profiles" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.261820 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e201ffc-ca6b-4726-8a05-0d2dccddb159" containerName="collect-profiles" Jan 27 15:33:06 crc kubenswrapper[4833]: E0127 15:33:06.261841 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="registry-server" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.261847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="registry-server" Jan 27 15:33:06 crc kubenswrapper[4833]: E0127 15:33:06.261875 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="extract-utilities" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.261882 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="extract-utilities" Jan 27 15:33:06 crc kubenswrapper[4833]: E0127 15:33:06.261900 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="extract-content" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.261906 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="extract-content" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.262276 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e201ffc-ca6b-4726-8a05-0d2dccddb159" containerName="collect-profiles" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.262298 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca46096d-c85e-4879-ac3d-53dd185b944c" containerName="registry-server" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.264580 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.271961 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.293994 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.294147 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hw8\" (UniqueName: \"kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.294475 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.397528 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content\") pod \"redhat-marketplace-cg8cq\" (UID: 
\"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.398098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.398390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hw8\" (UniqueName: \"kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.398629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.399339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.422182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2hw8\" (UniqueName: \"kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8\") pod \"redhat-marketplace-cg8cq\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " 
pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:06 crc kubenswrapper[4833]: I0127 15:33:06.599095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:07 crc kubenswrapper[4833]: I0127 15:33:07.170837 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:07 crc kubenswrapper[4833]: W0127 15:33:07.179788 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1867a7b_aea8_49b1_9c15_e91acb0da116.slice/crio-8a7a0261c96acc69a6621556b84b61d9804e9939c794f1c64bb2a2e9e08f7c6f WatchSource:0}: Error finding container 8a7a0261c96acc69a6621556b84b61d9804e9939c794f1c64bb2a2e9e08f7c6f: Status 404 returned error can't find the container with id 8a7a0261c96acc69a6621556b84b61d9804e9939c794f1c64bb2a2e9e08f7c6f Jan 27 15:33:07 crc kubenswrapper[4833]: I0127 15:33:07.463309 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerStarted","Data":"1ac20d7c019156175f5c9395307d032c387a3b6677a1a7e4f2dce491f095fcec"} Jan 27 15:33:07 crc kubenswrapper[4833]: I0127 15:33:07.463731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerStarted","Data":"8a7a0261c96acc69a6621556b84b61d9804e9939c794f1c64bb2a2e9e08f7c6f"} Jan 27 15:33:08 crc kubenswrapper[4833]: I0127 15:33:08.475907 4833 generic.go:334] "Generic (PLEG): container finished" podID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerID="1ac20d7c019156175f5c9395307d032c387a3b6677a1a7e4f2dce491f095fcec" exitCode=0 Jan 27 15:33:08 crc kubenswrapper[4833]: I0127 15:33:08.476039 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerDied","Data":"1ac20d7c019156175f5c9395307d032c387a3b6677a1a7e4f2dce491f095fcec"} Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.486072 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerStarted","Data":"39c5c8a6953fcd9cc6777fa8fdfec85825c4d41f7a16cc4bf4126e7a086eb259"} Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.834620 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.837931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.861692 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.875152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.875232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.875289 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrzw\" (UniqueName: \"kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.976985 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.977156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrzw\" (UniqueName: \"kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.977391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.977697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:09 crc kubenswrapper[4833]: I0127 15:33:09.978073 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:10 crc kubenswrapper[4833]: I0127 15:33:10.006997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrzw\" (UniqueName: \"kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw\") pod \"community-operators-h6r7z\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:10 crc kubenswrapper[4833]: I0127 15:33:10.163925 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:10 crc kubenswrapper[4833]: I0127 15:33:10.528872 4833 generic.go:334] "Generic (PLEG): container finished" podID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerID="39c5c8a6953fcd9cc6777fa8fdfec85825c4d41f7a16cc4bf4126e7a086eb259" exitCode=0 Jan 27 15:33:10 crc kubenswrapper[4833]: I0127 15:33:10.529219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerDied","Data":"39c5c8a6953fcd9cc6777fa8fdfec85825c4d41f7a16cc4bf4126e7a086eb259"} Jan 27 15:33:10 crc kubenswrapper[4833]: I0127 15:33:10.756719 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:11 crc kubenswrapper[4833]: I0127 15:33:11.543937 4833 generic.go:334] "Generic (PLEG): container finished" podID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerID="5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d" exitCode=0 Jan 27 15:33:11 crc kubenswrapper[4833]: I0127 15:33:11.544134 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerDied","Data":"5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d"} Jan 27 15:33:11 crc kubenswrapper[4833]: I0127 15:33:11.545161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerStarted","Data":"56928e339ba37380b6b4611f66933ff93235207e7f7a12de7595881e3feab3d5"} Jan 27 15:33:11 crc kubenswrapper[4833]: I0127 15:33:11.553265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerStarted","Data":"72d5a0f77b1926ce65c7833a7cc71cff9af8b10a4a32a054a8cbff88c9937d9f"} Jan 27 15:33:11 crc kubenswrapper[4833]: I0127 15:33:11.601650 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cg8cq" podStartSLOduration=2.017760628 podStartE2EDuration="5.601617512s" podCreationTimestamp="2026-01-27 15:33:06 +0000 UTC" firstStartedPulling="2026-01-27 15:33:07.465035885 +0000 UTC m=+4889.116360287" lastFinishedPulling="2026-01-27 15:33:11.048892769 +0000 UTC m=+4892.700217171" observedRunningTime="2026-01-27 15:33:11.584292249 +0000 UTC m=+4893.235616651" watchObservedRunningTime="2026-01-27 15:33:11.601617512 +0000 UTC m=+4893.252941914" Jan 27 15:33:13 crc kubenswrapper[4833]: I0127 15:33:13.576778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerStarted","Data":"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b"} Jan 27 15:33:14 crc kubenswrapper[4833]: I0127 15:33:14.589342 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerID="88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b" exitCode=0 Jan 27 15:33:14 crc kubenswrapper[4833]: I0127 15:33:14.589398 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerDied","Data":"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b"} Jan 27 15:33:16 crc kubenswrapper[4833]: I0127 15:33:16.600055 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:16 crc kubenswrapper[4833]: I0127 15:33:16.600632 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:16 crc kubenswrapper[4833]: I0127 15:33:16.613667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerStarted","Data":"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc"} Jan 27 15:33:16 crc kubenswrapper[4833]: I0127 15:33:16.639913 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6r7z" podStartSLOduration=3.710478252 podStartE2EDuration="7.639881294s" podCreationTimestamp="2026-01-27 15:33:09 +0000 UTC" firstStartedPulling="2026-01-27 15:33:11.54617203 +0000 UTC m=+4893.197496432" lastFinishedPulling="2026-01-27 15:33:15.475575072 +0000 UTC m=+4897.126899474" observedRunningTime="2026-01-27 15:33:16.632924405 +0000 UTC m=+4898.284248807" watchObservedRunningTime="2026-01-27 15:33:16.639881294 +0000 UTC m=+4898.291205696" Jan 27 15:33:16 crc kubenswrapper[4833]: I0127 15:33:16.659428 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:18 
crc kubenswrapper[4833]: I0127 15:33:18.165307 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:19 crc kubenswrapper[4833]: I0127 15:33:19.424471 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:19 crc kubenswrapper[4833]: I0127 15:33:19.640842 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cg8cq" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="registry-server" containerID="cri-o://72d5a0f77b1926ce65c7833a7cc71cff9af8b10a4a32a054a8cbff88c9937d9f" gracePeriod=2 Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.164201 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.164554 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.223175 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.652369 4833 generic.go:334] "Generic (PLEG): container finished" podID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerID="72d5a0f77b1926ce65c7833a7cc71cff9af8b10a4a32a054a8cbff88c9937d9f" exitCode=0 Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.652514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerDied","Data":"72d5a0f77b1926ce65c7833a7cc71cff9af8b10a4a32a054a8cbff88c9937d9f"} Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.718159 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.805308 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.913237 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2hw8\" (UniqueName: \"kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8\") pod \"a1867a7b-aea8-49b1-9c15-e91acb0da116\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.913468 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content\") pod \"a1867a7b-aea8-49b1-9c15-e91acb0da116\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.913578 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities\") pod \"a1867a7b-aea8-49b1-9c15-e91acb0da116\" (UID: \"a1867a7b-aea8-49b1-9c15-e91acb0da116\") " Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.915208 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities" (OuterVolumeSpecName: "utilities") pod "a1867a7b-aea8-49b1-9c15-e91acb0da116" (UID: "a1867a7b-aea8-49b1-9c15-e91acb0da116"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.920418 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8" (OuterVolumeSpecName: "kube-api-access-b2hw8") pod "a1867a7b-aea8-49b1-9c15-e91acb0da116" (UID: "a1867a7b-aea8-49b1-9c15-e91acb0da116"). InnerVolumeSpecName "kube-api-access-b2hw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:20 crc kubenswrapper[4833]: I0127 15:33:20.937134 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1867a7b-aea8-49b1-9c15-e91acb0da116" (UID: "a1867a7b-aea8-49b1-9c15-e91acb0da116"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.015918 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.015970 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2hw8\" (UniqueName: \"kubernetes.io/projected/a1867a7b-aea8-49b1-9c15-e91acb0da116-kube-api-access-b2hw8\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.015983 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1867a7b-aea8-49b1-9c15-e91acb0da116-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.665976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cg8cq" 
event={"ID":"a1867a7b-aea8-49b1-9c15-e91acb0da116","Type":"ContainerDied","Data":"8a7a0261c96acc69a6621556b84b61d9804e9939c794f1c64bb2a2e9e08f7c6f"} Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.666282 4833 scope.go:117] "RemoveContainer" containerID="72d5a0f77b1926ce65c7833a7cc71cff9af8b10a4a32a054a8cbff88c9937d9f" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.665999 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cg8cq" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.690275 4833 scope.go:117] "RemoveContainer" containerID="39c5c8a6953fcd9cc6777fa8fdfec85825c4d41f7a16cc4bf4126e7a086eb259" Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.694381 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.705853 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cg8cq"] Jan 27 15:33:21 crc kubenswrapper[4833]: I0127 15:33:21.715883 4833 scope.go:117] "RemoveContainer" containerID="1ac20d7c019156175f5c9395307d032c387a3b6677a1a7e4f2dce491f095fcec" Jan 27 15:33:22 crc kubenswrapper[4833]: I0127 15:33:22.630500 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:22 crc kubenswrapper[4833]: I0127 15:33:22.675097 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6r7z" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="registry-server" containerID="cri-o://952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc" gracePeriod=2 Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.232648 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" 
path="/var/lib/kubelet/pods/a1867a7b-aea8-49b1-9c15-e91acb0da116/volumes" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.314889 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.362764 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xrzw\" (UniqueName: \"kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw\") pod \"5a54e9b2-38b4-420f-8562-8cdf05123031\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.362857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content\") pod \"5a54e9b2-38b4-420f-8562-8cdf05123031\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.363076 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities\") pod \"5a54e9b2-38b4-420f-8562-8cdf05123031\" (UID: \"5a54e9b2-38b4-420f-8562-8cdf05123031\") " Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.364050 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities" (OuterVolumeSpecName: "utilities") pod "5a54e9b2-38b4-420f-8562-8cdf05123031" (UID: "5a54e9b2-38b4-420f-8562-8cdf05123031"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.369695 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw" (OuterVolumeSpecName: "kube-api-access-8xrzw") pod "5a54e9b2-38b4-420f-8562-8cdf05123031" (UID: "5a54e9b2-38b4-420f-8562-8cdf05123031"). InnerVolumeSpecName "kube-api-access-8xrzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.423004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a54e9b2-38b4-420f-8562-8cdf05123031" (UID: "5a54e9b2-38b4-420f-8562-8cdf05123031"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.466686 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.466723 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a54e9b2-38b4-420f-8562-8cdf05123031-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.466734 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xrzw\" (UniqueName: \"kubernetes.io/projected/5a54e9b2-38b4-420f-8562-8cdf05123031-kube-api-access-8xrzw\") on node \"crc\" DevicePath \"\"" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.691205 4833 generic.go:334] "Generic (PLEG): container finished" podID="5a54e9b2-38b4-420f-8562-8cdf05123031" 
containerID="952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc" exitCode=0 Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.691275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerDied","Data":"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc"} Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.691347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6r7z" event={"ID":"5a54e9b2-38b4-420f-8562-8cdf05123031","Type":"ContainerDied","Data":"56928e339ba37380b6b4611f66933ff93235207e7f7a12de7595881e3feab3d5"} Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.691340 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6r7z" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.691371 4833 scope.go:117] "RemoveContainer" containerID="952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.721204 4833 scope.go:117] "RemoveContainer" containerID="88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.752093 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.754930 4833 scope.go:117] "RemoveContainer" containerID="5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.769991 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6r7z"] Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.815157 4833 scope.go:117] "RemoveContainer" containerID="952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc" Jan 27 
15:33:23 crc kubenswrapper[4833]: E0127 15:33:23.816107 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc\": container with ID starting with 952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc not found: ID does not exist" containerID="952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.816155 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc"} err="failed to get container status \"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc\": rpc error: code = NotFound desc = could not find container \"952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc\": container with ID starting with 952310cbadb4036a36b46bffacfada73d331c41be07a9c1e052b44e03e2c6ecc not found: ID does not exist" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.816193 4833 scope.go:117] "RemoveContainer" containerID="88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b" Jan 27 15:33:23 crc kubenswrapper[4833]: E0127 15:33:23.816780 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b\": container with ID starting with 88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b not found: ID does not exist" containerID="88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.816839 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b"} err="failed to get container status 
\"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b\": rpc error: code = NotFound desc = could not find container \"88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b\": container with ID starting with 88b2c29624cc09c27df0847e145cc9fd114b17016eb0e1bfd91d31d7d1bf051b not found: ID does not exist" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.816873 4833 scope.go:117] "RemoveContainer" containerID="5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d" Jan 27 15:33:23 crc kubenswrapper[4833]: E0127 15:33:23.817272 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d\": container with ID starting with 5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d not found: ID does not exist" containerID="5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d" Jan 27 15:33:23 crc kubenswrapper[4833]: I0127 15:33:23.817295 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d"} err="failed to get container status \"5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d\": rpc error: code = NotFound desc = could not find container \"5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d\": container with ID starting with 5eb21c3351e0061dd9c17cb0574abd118795335e618998886d5ca8c4be71c72d not found: ID does not exist" Jan 27 15:33:25 crc kubenswrapper[4833]: I0127 15:33:25.225187 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" path="/var/lib/kubelet/pods/5a54e9b2-38b4-420f-8562-8cdf05123031/volumes" Jan 27 15:35:02 crc kubenswrapper[4833]: I0127 15:35:02.261094 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:35:02 crc kubenswrapper[4833]: I0127 15:35:02.261692 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:35:32 crc kubenswrapper[4833]: I0127 15:35:32.260938 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:35:32 crc kubenswrapper[4833]: I0127 15:35:32.261354 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:36:02 crc kubenswrapper[4833]: I0127 15:36:02.260719 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:36:02 crc kubenswrapper[4833]: I0127 15:36:02.261279 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:36:02 crc kubenswrapper[4833]: I0127 15:36:02.261338 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:36:02 crc kubenswrapper[4833]: I0127 15:36:02.261843 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:36:02 crc kubenswrapper[4833]: I0127 15:36:02.261977 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" gracePeriod=600 Jan 27 15:36:02 crc kubenswrapper[4833]: E0127 15:36:02.385084 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:36:03 crc kubenswrapper[4833]: I0127 15:36:03.187066 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" exitCode=0 Jan 27 15:36:03 crc kubenswrapper[4833]: I0127 15:36:03.187118 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d"} Jan 27 15:36:03 crc kubenswrapper[4833]: I0127 15:36:03.187153 4833 scope.go:117] "RemoveContainer" containerID="08531c2a732e5c21aad2303180cc67357633669017fcf13dd509c2e7e2e8c97e" Jan 27 15:36:03 crc kubenswrapper[4833]: I0127 15:36:03.187993 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:36:03 crc kubenswrapper[4833]: E0127 15:36:03.188649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:36:16 crc kubenswrapper[4833]: I0127 15:36:16.211291 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:36:16 crc kubenswrapper[4833]: E0127 15:36:16.212014 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:36:28 crc kubenswrapper[4833]: I0127 15:36:28.210783 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:36:28 crc kubenswrapper[4833]: E0127 15:36:28.211656 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:36:43 crc kubenswrapper[4833]: I0127 15:36:43.210593 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:36:43 crc kubenswrapper[4833]: E0127 15:36:43.211358 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:36:56 crc kubenswrapper[4833]: I0127 15:36:56.210786 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:36:56 crc kubenswrapper[4833]: E0127 15:36:56.211551 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:37:11 crc kubenswrapper[4833]: I0127 15:37:11.210977 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:37:11 crc kubenswrapper[4833]: E0127 
15:37:11.212794 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:37:24 crc kubenswrapper[4833]: I0127 15:37:24.212233 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:37:24 crc kubenswrapper[4833]: E0127 15:37:24.213925 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:37:36 crc kubenswrapper[4833]: I0127 15:37:36.211103 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:37:36 crc kubenswrapper[4833]: E0127 15:37:36.212155 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:37:47 crc kubenswrapper[4833]: I0127 15:37:47.212196 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:37:47 crc 
kubenswrapper[4833]: E0127 15:37:47.213001 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:37:58 crc kubenswrapper[4833]: I0127 15:37:58.211585 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:37:58 crc kubenswrapper[4833]: E0127 15:37:58.212609 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:06 crc kubenswrapper[4833]: I0127 15:38:06.574598 4833 generic.go:334] "Generic (PLEG): container finished" podID="5faf0556-c6da-4c93-9ce7-f02ed716c092" containerID="816877fbc5e07b2a5a0ff947761635c08a36ab7703cfeb21646f99b3061cea93" exitCode=1 Jan 27 15:38:06 crc kubenswrapper[4833]: I0127 15:38:06.574715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5faf0556-c6da-4c93-9ce7-f02ed716c092","Type":"ContainerDied","Data":"816877fbc5e07b2a5a0ff947761635c08a36ab7703cfeb21646f99b3061cea93"} Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.329696 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.518935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519008 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519125 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519172 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519205 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519246 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519325 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2r76\" (UniqueName: \"kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.519420 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data\") pod \"5faf0556-c6da-4c93-9ce7-f02ed716c092\" (UID: \"5faf0556-c6da-4c93-9ce7-f02ed716c092\") " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.520166 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.520664 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data" (OuterVolumeSpecName: "config-data") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.533529 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.534585 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76" (OuterVolumeSpecName: "kube-api-access-g2r76") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "kube-api-access-g2r76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.550195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.550893 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.553931 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.587432 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.595530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5faf0556-c6da-4c93-9ce7-f02ed716c092","Type":"ContainerDied","Data":"d0096a610a6c73ab5149d37e54f391d63865b5ab11cb0cfd225a7f5a0292882f"} Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.595569 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0096a610a6c73ab5149d37e54f391d63865b5ab11cb0cfd225a7f5a0292882f" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.595626 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623263 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623306 4833 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623324 4833 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623359 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623374 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2r76\" (UniqueName: 
\"kubernetes.io/projected/5faf0556-c6da-4c93-9ce7-f02ed716c092-kube-api-access-g2r76\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623386 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623397 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623408 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5faf0556-c6da-4c93-9ce7-f02ed716c092-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.623771 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5faf0556-c6da-4c93-9ce7-f02ed716c092" (UID: "5faf0556-c6da-4c93-9ce7-f02ed716c092"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.645843 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.725549 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:08 crc kubenswrapper[4833]: I0127 15:38:08.725578 4833 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5faf0556-c6da-4c93-9ce7-f02ed716c092-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:12 crc kubenswrapper[4833]: I0127 15:38:12.210997 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:38:12 crc kubenswrapper[4833]: E0127 15:38:12.212152 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.762331 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763242 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="extract-content" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763260 4833 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="extract-content" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763274 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="extract-utilities" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763281 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="extract-utilities" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763303 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="extract-utilities" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763313 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="extract-utilities" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763332 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="extract-content" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763338 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="extract-content" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763353 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faf0556-c6da-4c93-9ce7-f02ed716c092" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763361 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faf0556-c6da-4c93-9ce7-f02ed716c092" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763379 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763386 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: E0127 15:38:17.763397 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763405 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763612 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5faf0556-c6da-4c93-9ce7-f02ed716c092" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763716 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1867a7b-aea8-49b1-9c15-e91acb0da116" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.763730 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a54e9b2-38b4-420f-8562-8cdf05123031" containerName="registry-server" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.764553 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.768288 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bvjzm" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.780473 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.919621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:17 crc kubenswrapper[4833]: I0127 15:38:17.919715 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87j6\" (UniqueName: \"kubernetes.io/projected/c501d21c-94f8-4a2f-966c-65c04b362809-kube-api-access-h87j6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.021988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.022073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87j6\" (UniqueName: 
\"kubernetes.io/projected/c501d21c-94f8-4a2f-966c-65c04b362809-kube-api-access-h87j6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.022317 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.043576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87j6\" (UniqueName: \"kubernetes.io/projected/c501d21c-94f8-4a2f-966c-65c04b362809-kube-api-access-h87j6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.048751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c501d21c-94f8-4a2f-966c-65c04b362809\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.085549 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.522095 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.522375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:38:18 crc kubenswrapper[4833]: I0127 15:38:18.713429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c501d21c-94f8-4a2f-966c-65c04b362809","Type":"ContainerStarted","Data":"23a86e3ea57dafca4119406c873d9d3afcaad24866e66a43bc075530ad5b9f16"} Jan 27 15:38:20 crc kubenswrapper[4833]: I0127 15:38:20.735491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c501d21c-94f8-4a2f-966c-65c04b362809","Type":"ContainerStarted","Data":"5ac48339cb67a88a24ff57c023ec523c496505e9b39324101a334e700d9da072"} Jan 27 15:38:20 crc kubenswrapper[4833]: I0127 15:38:20.750537 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9850222560000002 podStartE2EDuration="3.750520784s" podCreationTimestamp="2026-01-27 15:38:17 +0000 UTC" firstStartedPulling="2026-01-27 15:38:18.521865778 +0000 UTC m=+5200.173190180" lastFinishedPulling="2026-01-27 15:38:20.287364306 +0000 UTC m=+5201.938688708" observedRunningTime="2026-01-27 15:38:20.749494889 +0000 UTC m=+5202.400819291" watchObservedRunningTime="2026-01-27 15:38:20.750520784 +0000 UTC m=+5202.401845186" Jan 27 15:38:25 crc kubenswrapper[4833]: I0127 15:38:25.211582 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:38:25 crc 
kubenswrapper[4833]: E0127 15:38:25.212659 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:36 crc kubenswrapper[4833]: I0127 15:38:36.209911 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:38:36 crc kubenswrapper[4833]: E0127 15:38:36.210667 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:47 crc kubenswrapper[4833]: I0127 15:38:47.210909 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:38:47 crc kubenswrapper[4833]: E0127 15:38:47.211696 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.210907 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 
27 15:38:58 crc kubenswrapper[4833]: E0127 15:38:58.211595 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.476682 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jwx6p/must-gather-vpbbz"] Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.492743 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jwx6p/must-gather-vpbbz"] Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.492903 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.497271 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jwx6p"/"default-dockercfg-m5sg2" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.497329 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jwx6p"/"openshift-service-ca.crt" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.504298 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jwx6p"/"kube-root-ca.crt" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.673286 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 
15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.673528 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67zf\" (UniqueName: \"kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.775848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67zf\" (UniqueName: \"kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.775943 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.776378 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc kubenswrapper[4833]: I0127 15:38:58.796179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67zf\" (UniqueName: \"kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf\") pod \"must-gather-vpbbz\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:58 crc 
kubenswrapper[4833]: I0127 15:38:58.821883 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:38:59 crc kubenswrapper[4833]: I0127 15:38:59.179350 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jwx6p/must-gather-vpbbz"] Jan 27 15:39:00 crc kubenswrapper[4833]: I0127 15:39:00.105272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" event={"ID":"ba1dd023-060e-43f3-80ab-c85ce4f45b63","Type":"ContainerStarted","Data":"c9ab5873ff3c531f499cb524b2cced523f69921fd42abb50543dc34908b67d1c"} Jan 27 15:39:07 crc kubenswrapper[4833]: I0127 15:39:07.185403 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" event={"ID":"ba1dd023-060e-43f3-80ab-c85ce4f45b63","Type":"ContainerStarted","Data":"0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd"} Jan 27 15:39:07 crc kubenswrapper[4833]: I0127 15:39:07.186098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" event={"ID":"ba1dd023-060e-43f3-80ab-c85ce4f45b63","Type":"ContainerStarted","Data":"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f"} Jan 27 15:39:07 crc kubenswrapper[4833]: I0127 15:39:07.211035 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" podStartSLOduration=2.680850928 podStartE2EDuration="9.211014803s" podCreationTimestamp="2026-01-27 15:38:58 +0000 UTC" firstStartedPulling="2026-01-27 15:38:59.182997725 +0000 UTC m=+5240.834322117" lastFinishedPulling="2026-01-27 15:39:05.71316159 +0000 UTC m=+5247.364485992" observedRunningTime="2026-01-27 15:39:07.201816969 +0000 UTC m=+5248.853141391" watchObservedRunningTime="2026-01-27 15:39:07.211014803 +0000 UTC m=+5248.862339225" Jan 27 15:39:09 crc kubenswrapper[4833]: I0127 
15:39:09.818206 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-hdcgn"] Jan 27 15:39:09 crc kubenswrapper[4833]: I0127 15:39:09.819982 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:09 crc kubenswrapper[4833]: I0127 15:39:09.959977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4nwh\" (UniqueName: \"kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:09 crc kubenswrapper[4833]: I0127 15:39:09.960287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.061863 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4nwh\" (UniqueName: \"kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.062062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.062197 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.095312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4nwh\" (UniqueName: \"kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh\") pod \"crc-debug-hdcgn\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.147216 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:39:10 crc kubenswrapper[4833]: W0127 15:39:10.183698 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb834f46b_cf18_41f5_b862_8b9004efdb31.slice/crio-57ac5c74fa967c948cfef4d3b4822b90374d52a70e5acaef1d2ce45e336f80e4 WatchSource:0}: Error finding container 57ac5c74fa967c948cfef4d3b4822b90374d52a70e5acaef1d2ce45e336f80e4: Status 404 returned error can't find the container with id 57ac5c74fa967c948cfef4d3b4822b90374d52a70e5acaef1d2ce45e336f80e4 Jan 27 15:39:10 crc kubenswrapper[4833]: I0127 15:39:10.255675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" event={"ID":"b834f46b-cf18-41f5-b862-8b9004efdb31","Type":"ContainerStarted","Data":"57ac5c74fa967c948cfef4d3b4822b90374d52a70e5acaef1d2ce45e336f80e4"} Jan 27 15:39:11 crc kubenswrapper[4833]: I0127 15:39:11.210406 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:39:11 crc kubenswrapper[4833]: E0127 15:39:11.211223 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:39:22 crc kubenswrapper[4833]: I0127 15:39:22.375098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" event={"ID":"b834f46b-cf18-41f5-b862-8b9004efdb31","Type":"ContainerStarted","Data":"a147cb31e8720fc2b828f77ef29765fe3be4dc7910650d2d5770c2618878e8be"} Jan 27 15:39:22 crc kubenswrapper[4833]: I0127 15:39:22.392086 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" podStartSLOduration=1.809082009 podStartE2EDuration="13.39206855s" podCreationTimestamp="2026-01-27 15:39:09 +0000 UTC" firstStartedPulling="2026-01-27 15:39:10.192539174 +0000 UTC m=+5251.843863566" lastFinishedPulling="2026-01-27 15:39:21.775525695 +0000 UTC m=+5263.426850107" observedRunningTime="2026-01-27 15:39:22.390299537 +0000 UTC m=+5264.041623939" watchObservedRunningTime="2026-01-27 15:39:22.39206855 +0000 UTC m=+5264.043392952" Jan 27 15:39:25 crc kubenswrapper[4833]: I0127 15:39:25.211337 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:39:25 crc kubenswrapper[4833]: E0127 15:39:25.212118 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:39:37 crc kubenswrapper[4833]: 
I0127 15:39:37.211006 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:39:37 crc kubenswrapper[4833]: E0127 15:39:37.211833 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:39:48 crc kubenswrapper[4833]: I0127 15:39:48.210722 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:39:48 crc kubenswrapper[4833]: E0127 15:39:48.211540 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.145373 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.157119 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.157231 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.238856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.238919 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.239029 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29t5\" (UniqueName: \"kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.341251 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29t5\" (UniqueName: \"kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.341711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content\") pod 
\"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.341800 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.342345 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.342427 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.366481 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29t5\" (UniqueName: \"kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5\") pod \"certified-operators-ml2wg\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:55 crc kubenswrapper[4833]: I0127 15:39:55.489437 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:39:56 crc kubenswrapper[4833]: I0127 15:39:56.032197 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:39:56 crc kubenswrapper[4833]: I0127 15:39:56.722975 4833 generic.go:334] "Generic (PLEG): container finished" podID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerID="0123911d1436ec6a1e7ec65841c209b6b052cda1334e787b857dc9d5b6bef0ad" exitCode=0 Jan 27 15:39:56 crc kubenswrapper[4833]: I0127 15:39:56.723087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerDied","Data":"0123911d1436ec6a1e7ec65841c209b6b052cda1334e787b857dc9d5b6bef0ad"} Jan 27 15:39:56 crc kubenswrapper[4833]: I0127 15:39:56.723339 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerStarted","Data":"6224e2880e9be5295a657a89dc437b6d4bf5d1cb23de839eba184be6f509c160"} Jan 27 15:39:58 crc kubenswrapper[4833]: I0127 15:39:58.742877 4833 generic.go:334] "Generic (PLEG): container finished" podID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerID="2b205fecf33fdaa4a2a3990c059f4f0079c2a6d19b3b45f9c1f62462f8acc9c5" exitCode=0 Jan 27 15:39:58 crc kubenswrapper[4833]: I0127 15:39:58.743386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerDied","Data":"2b205fecf33fdaa4a2a3990c059f4f0079c2a6d19b3b45f9c1f62462f8acc9c5"} Jan 27 15:39:59 crc kubenswrapper[4833]: I0127 15:39:59.754760 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" 
event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerStarted","Data":"6ebf875a475bdf5d4ee132a8d7abb37636f6cd53c59153a4dfc474f90a7af6fd"} Jan 27 15:39:59 crc kubenswrapper[4833]: I0127 15:39:59.787012 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ml2wg" podStartSLOduration=2.240888829 podStartE2EDuration="4.786987499s" podCreationTimestamp="2026-01-27 15:39:55 +0000 UTC" firstStartedPulling="2026-01-27 15:39:56.725116991 +0000 UTC m=+5298.376441393" lastFinishedPulling="2026-01-27 15:39:59.271215661 +0000 UTC m=+5300.922540063" observedRunningTime="2026-01-27 15:39:59.774758469 +0000 UTC m=+5301.426082871" watchObservedRunningTime="2026-01-27 15:39:59.786987499 +0000 UTC m=+5301.438311901" Jan 27 15:40:00 crc kubenswrapper[4833]: I0127 15:40:00.211755 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:40:00 crc kubenswrapper[4833]: E0127 15:40:00.211975 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:40:05 crc kubenswrapper[4833]: I0127 15:40:05.490255 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:05 crc kubenswrapper[4833]: I0127 15:40:05.491202 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:05 crc kubenswrapper[4833]: I0127 15:40:05.547268 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:05 crc kubenswrapper[4833]: I0127 15:40:05.868021 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:05 crc kubenswrapper[4833]: I0127 15:40:05.917810 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:40:07 crc kubenswrapper[4833]: I0127 15:40:07.838355 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ml2wg" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="registry-server" containerID="cri-o://6ebf875a475bdf5d4ee132a8d7abb37636f6cd53c59153a4dfc474f90a7af6fd" gracePeriod=2 Jan 27 15:40:08 crc kubenswrapper[4833]: I0127 15:40:08.852380 4833 generic.go:334] "Generic (PLEG): container finished" podID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerID="6ebf875a475bdf5d4ee132a8d7abb37636f6cd53c59153a4dfc474f90a7af6fd" exitCode=0 Jan 27 15:40:08 crc kubenswrapper[4833]: I0127 15:40:08.852690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerDied","Data":"6ebf875a475bdf5d4ee132a8d7abb37636f6cd53c59153a4dfc474f90a7af6fd"} Jan 27 15:40:08 crc kubenswrapper[4833]: I0127 15:40:08.853093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ml2wg" event={"ID":"1745b7a7-0e70-4f97-8b93-12bb398d6c0c","Type":"ContainerDied","Data":"6224e2880e9be5295a657a89dc437b6d4bf5d1cb23de839eba184be6f509c160"} Jan 27 15:40:08 crc kubenswrapper[4833]: I0127 15:40:08.853183 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6224e2880e9be5295a657a89dc437b6d4bf5d1cb23de839eba184be6f509c160" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.019211 4833 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.157942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities\") pod \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.158001 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content\") pod \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.158300 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r29t5\" (UniqueName: \"kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5\") pod \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\" (UID: \"1745b7a7-0e70-4f97-8b93-12bb398d6c0c\") " Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.159101 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities" (OuterVolumeSpecName: "utilities") pod "1745b7a7-0e70-4f97-8b93-12bb398d6c0c" (UID: "1745b7a7-0e70-4f97-8b93-12bb398d6c0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.159461 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.165737 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5" (OuterVolumeSpecName: "kube-api-access-r29t5") pod "1745b7a7-0e70-4f97-8b93-12bb398d6c0c" (UID: "1745b7a7-0e70-4f97-8b93-12bb398d6c0c"). InnerVolumeSpecName "kube-api-access-r29t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.205647 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1745b7a7-0e70-4f97-8b93-12bb398d6c0c" (UID: "1745b7a7-0e70-4f97-8b93-12bb398d6c0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.262057 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r29t5\" (UniqueName: \"kubernetes.io/projected/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-kube-api-access-r29t5\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.262100 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1745b7a7-0e70-4f97-8b93-12bb398d6c0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.860668 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ml2wg" Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.896963 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:40:09 crc kubenswrapper[4833]: I0127 15:40:09.905154 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ml2wg"] Jan 27 15:40:11 crc kubenswrapper[4833]: I0127 15:40:11.221762 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" path="/var/lib/kubelet/pods/1745b7a7-0e70-4f97-8b93-12bb398d6c0c/volumes" Jan 27 15:40:12 crc kubenswrapper[4833]: I0127 15:40:12.211018 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:40:12 crc kubenswrapper[4833]: E0127 15:40:12.211332 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:40:15 crc kubenswrapper[4833]: I0127 15:40:15.923696 4833 generic.go:334] "Generic (PLEG): container finished" podID="b834f46b-cf18-41f5-b862-8b9004efdb31" containerID="a147cb31e8720fc2b828f77ef29765fe3be4dc7910650d2d5770c2618878e8be" exitCode=0 Jan 27 15:40:15 crc kubenswrapper[4833]: I0127 15:40:15.923808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" event={"ID":"b834f46b-cf18-41f5-b862-8b9004efdb31","Type":"ContainerDied","Data":"a147cb31e8720fc2b828f77ef29765fe3be4dc7910650d2d5770c2618878e8be"} Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.046695 4833 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.091934 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-hdcgn"] Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.100913 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-hdcgn"] Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.209931 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4nwh\" (UniqueName: \"kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh\") pod \"b834f46b-cf18-41f5-b862-8b9004efdb31\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.210262 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host\") pod \"b834f46b-cf18-41f5-b862-8b9004efdb31\" (UID: \"b834f46b-cf18-41f5-b862-8b9004efdb31\") " Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.210415 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host" (OuterVolumeSpecName: "host") pod "b834f46b-cf18-41f5-b862-8b9004efdb31" (UID: "b834f46b-cf18-41f5-b862-8b9004efdb31"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.210884 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b834f46b-cf18-41f5-b862-8b9004efdb31-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.217907 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh" (OuterVolumeSpecName: "kube-api-access-b4nwh") pod "b834f46b-cf18-41f5-b862-8b9004efdb31" (UID: "b834f46b-cf18-41f5-b862-8b9004efdb31"). InnerVolumeSpecName "kube-api-access-b4nwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.222597 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b834f46b-cf18-41f5-b862-8b9004efdb31" path="/var/lib/kubelet/pods/b834f46b-cf18-41f5-b862-8b9004efdb31/volumes" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.313748 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4nwh\" (UniqueName: \"kubernetes.io/projected/b834f46b-cf18-41f5-b862-8b9004efdb31-kube-api-access-b4nwh\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.943682 4833 scope.go:117] "RemoveContainer" containerID="a147cb31e8720fc2b828f77ef29765fe3be4dc7910650d2d5770c2618878e8be" Jan 27 15:40:17 crc kubenswrapper[4833]: I0127 15:40:17.943754 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-hdcgn" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.287792 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-wtgp6"] Jan 27 15:40:18 crc kubenswrapper[4833]: E0127 15:40:18.288413 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="registry-server" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288429 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="registry-server" Jan 27 15:40:18 crc kubenswrapper[4833]: E0127 15:40:18.288482 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="extract-utilities" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288489 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="extract-utilities" Jan 27 15:40:18 crc kubenswrapper[4833]: E0127 15:40:18.288505 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="extract-content" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288517 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" containerName="extract-content" Jan 27 15:40:18 crc kubenswrapper[4833]: E0127 15:40:18.288539 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b834f46b-cf18-41f5-b862-8b9004efdb31" containerName="container-00" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288547 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b834f46b-cf18-41f5-b862-8b9004efdb31" containerName="container-00" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288784 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1745b7a7-0e70-4f97-8b93-12bb398d6c0c" 
containerName="registry-server" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.288798 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b834f46b-cf18-41f5-b862-8b9004efdb31" containerName="container-00" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.289423 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.436770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.436911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnn8m\" (UniqueName: \"kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.538918 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.538981 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnn8m\" (UniqueName: \"kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc 
kubenswrapper[4833]: I0127 15:40:18.539040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.558029 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnn8m\" (UniqueName: \"kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m\") pod \"crc-debug-wtgp6\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:18 crc kubenswrapper[4833]: I0127 15:40:18.606341 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:19 crc kubenswrapper[4833]: I0127 15:40:19.962752 4833 generic.go:334] "Generic (PLEG): container finished" podID="1e2b3d08-c07c-4fea-ab63-2c2181719bb5" containerID="17933e07924efdf593de31797ab32cf393207f2021b1e5cef482394b8fc61364" exitCode=0 Jan 27 15:40:19 crc kubenswrapper[4833]: I0127 15:40:19.962827 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" event={"ID":"1e2b3d08-c07c-4fea-ab63-2c2181719bb5","Type":"ContainerDied","Data":"17933e07924efdf593de31797ab32cf393207f2021b1e5cef482394b8fc61364"} Jan 27 15:40:19 crc kubenswrapper[4833]: I0127 15:40:19.963246 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" event={"ID":"1e2b3d08-c07c-4fea-ab63-2c2181719bb5","Type":"ContainerStarted","Data":"eb46c4dd81ac8e68edb2c9793d1bc3ee71ccb2ccddd6bbd93813ea96c86a8b18"} Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.410098 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.490077 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnn8m\" (UniqueName: \"kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m\") pod \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.490133 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host\") pod \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\" (UID: \"1e2b3d08-c07c-4fea-ab63-2c2181719bb5\") " Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.490284 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host" (OuterVolumeSpecName: "host") pod "1e2b3d08-c07c-4fea-ab63-2c2181719bb5" (UID: "1e2b3d08-c07c-4fea-ab63-2c2181719bb5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.490694 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.506231 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m" (OuterVolumeSpecName: "kube-api-access-jnn8m") pod "1e2b3d08-c07c-4fea-ab63-2c2181719bb5" (UID: "1e2b3d08-c07c-4fea-ab63-2c2181719bb5"). InnerVolumeSpecName "kube-api-access-jnn8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.591940 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnn8m\" (UniqueName: \"kubernetes.io/projected/1e2b3d08-c07c-4fea-ab63-2c2181719bb5-kube-api-access-jnn8m\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.983604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" event={"ID":"1e2b3d08-c07c-4fea-ab63-2c2181719bb5","Type":"ContainerDied","Data":"eb46c4dd81ac8e68edb2c9793d1bc3ee71ccb2ccddd6bbd93813ea96c86a8b18"} Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.983656 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb46c4dd81ac8e68edb2c9793d1bc3ee71ccb2ccddd6bbd93813ea96c86a8b18" Jan 27 15:40:21 crc kubenswrapper[4833]: I0127 15:40:21.983786 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-wtgp6" Jan 27 15:40:22 crc kubenswrapper[4833]: I0127 15:40:22.567349 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-wtgp6"] Jan 27 15:40:22 crc kubenswrapper[4833]: I0127 15:40:22.586052 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-wtgp6"] Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.211045 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:40:23 crc kubenswrapper[4833]: E0127 15:40:23.211668 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.224265 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2b3d08-c07c-4fea-ab63-2c2181719bb5" path="/var/lib/kubelet/pods/1e2b3d08-c07c-4fea-ab63-2c2181719bb5/volumes" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.710624 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-9n87d"] Jan 27 15:40:23 crc kubenswrapper[4833]: E0127 15:40:23.711013 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2b3d08-c07c-4fea-ab63-2c2181719bb5" containerName="container-00" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.711026 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2b3d08-c07c-4fea-ab63-2c2181719bb5" containerName="container-00" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.711237 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2b3d08-c07c-4fea-ab63-2c2181719bb5" containerName="container-00" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.711867 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.841709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.842098 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkcz\" (UniqueName: \"kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.944357 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.944516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkcz\" (UniqueName: \"kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc kubenswrapper[4833]: I0127 15:40:23.944516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:23 crc 
kubenswrapper[4833]: I0127 15:40:23.962329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkcz\" (UniqueName: \"kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz\") pod \"crc-debug-9n87d\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:24 crc kubenswrapper[4833]: I0127 15:40:24.032551 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:24 crc kubenswrapper[4833]: W0127 15:40:24.062847 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30454f09_aea0_4acc_897f_86b14b16a250.slice/crio-f0401559e33c00946b251e64c574dad2d20cd3ee46d458b90c3019ba4f6e1efb WatchSource:0}: Error finding container f0401559e33c00946b251e64c574dad2d20cd3ee46d458b90c3019ba4f6e1efb: Status 404 returned error can't find the container with id f0401559e33c00946b251e64c574dad2d20cd3ee46d458b90c3019ba4f6e1efb Jan 27 15:40:25 crc kubenswrapper[4833]: I0127 15:40:25.014069 4833 generic.go:334] "Generic (PLEG): container finished" podID="30454f09-aea0-4acc-897f-86b14b16a250" containerID="2c31ef3c452f5e709322c8e7a81bded0160940f00bee85d504f81fc1e10aa525" exitCode=0 Jan 27 15:40:25 crc kubenswrapper[4833]: I0127 15:40:25.014141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" event={"ID":"30454f09-aea0-4acc-897f-86b14b16a250","Type":"ContainerDied","Data":"2c31ef3c452f5e709322c8e7a81bded0160940f00bee85d504f81fc1e10aa525"} Jan 27 15:40:25 crc kubenswrapper[4833]: I0127 15:40:25.014475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" event={"ID":"30454f09-aea0-4acc-897f-86b14b16a250","Type":"ContainerStarted","Data":"f0401559e33c00946b251e64c574dad2d20cd3ee46d458b90c3019ba4f6e1efb"} Jan 27 
15:40:25 crc kubenswrapper[4833]: I0127 15:40:25.066683 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-9n87d"] Jan 27 15:40:25 crc kubenswrapper[4833]: I0127 15:40:25.075639 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jwx6p/crc-debug-9n87d"] Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.149799 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.289983 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjkcz\" (UniqueName: \"kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz\") pod \"30454f09-aea0-4acc-897f-86b14b16a250\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.290052 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host\") pod \"30454f09-aea0-4acc-897f-86b14b16a250\" (UID: \"30454f09-aea0-4acc-897f-86b14b16a250\") " Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.290480 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host" (OuterVolumeSpecName: "host") pod "30454f09-aea0-4acc-897f-86b14b16a250" (UID: "30454f09-aea0-4acc-897f-86b14b16a250"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.290936 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/30454f09-aea0-4acc-897f-86b14b16a250-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.296676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz" (OuterVolumeSpecName: "kube-api-access-jjkcz") pod "30454f09-aea0-4acc-897f-86b14b16a250" (UID: "30454f09-aea0-4acc-897f-86b14b16a250"). InnerVolumeSpecName "kube-api-access-jjkcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:26 crc kubenswrapper[4833]: I0127 15:40:26.392648 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjkcz\" (UniqueName: \"kubernetes.io/projected/30454f09-aea0-4acc-897f-86b14b16a250-kube-api-access-jjkcz\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:27 crc kubenswrapper[4833]: I0127 15:40:27.033233 4833 scope.go:117] "RemoveContainer" containerID="2c31ef3c452f5e709322c8e7a81bded0160940f00bee85d504f81fc1e10aa525" Jan 27 15:40:27 crc kubenswrapper[4833]: I0127 15:40:27.033266 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jwx6p/crc-debug-9n87d" Jan 27 15:40:27 crc kubenswrapper[4833]: I0127 15:40:27.222933 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30454f09-aea0-4acc-897f-86b14b16a250" path="/var/lib/kubelet/pods/30454f09-aea0-4acc-897f-86b14b16a250/volumes" Jan 27 15:40:37 crc kubenswrapper[4833]: I0127 15:40:37.210729 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:40:37 crc kubenswrapper[4833]: E0127 15:40:37.211575 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.211361 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:40:50 crc kubenswrapper[4833]: E0127 15:40:50.212201 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.469782 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-689fdb796b-5m2hw_8f7651db-4775-4828-a4ff-923c39645dd0/barbican-api/0.log" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.652825 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-689fdb796b-5m2hw_8f7651db-4775-4828-a4ff-923c39645dd0/barbican-api-log/0.log" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.754409 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86457ddd7d-rqrvt_7786a9e5-715f-4bd8-9eb9-4bd199e72b92/barbican-keystone-listener/0.log" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.828362 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86457ddd7d-rqrvt_7786a9e5-715f-4bd8-9eb9-4bd199e72b92/barbican-keystone-listener-log/0.log" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.930652 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-694b98c787-zwc44_e6666db7-ad38-4773-bc1f-35667d8ea76b/barbican-worker/0.log" Jan 27 15:40:50 crc kubenswrapper[4833]: I0127 15:40:50.958781 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-694b98c787-zwc44_e6666db7-ad38-4773-bc1f-35667d8ea76b/barbican-worker-log/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.127041 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-sjwtd_2c857060-4fc1-48fd-86e9-e17957d53607/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.221641 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb7f2233-9721-49dd-b25b-eb5dcaa69303/ceilometer-central-agent/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.330673 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb7f2233-9721-49dd-b25b-eb5dcaa69303/proxy-httpd/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.331501 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_fb7f2233-9721-49dd-b25b-eb5dcaa69303/ceilometer-notification-agent/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.410793 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_fb7f2233-9721-49dd-b25b-eb5dcaa69303/sg-core/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.600747 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2877c4a-1ce0-44f7-9c27-fe1819344f2e/cinder-api-log/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.606027 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c2877c4a-1ce0-44f7-9c27-fe1819344f2e/cinder-api/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.724941 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_afc7077f-8140-4b04-bcad-c7553dc1ca64/cinder-scheduler/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.914395 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8mt2p_aab88e93-fea0-4209-9335-b3ce6714babc/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:51 crc kubenswrapper[4833]: I0127 15:40:51.919944 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_afc7077f-8140-4b04-bcad-c7553dc1ca64/probe/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.100953 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jswpw_6ebc1387-6b12-4592-86d6-92fe757cfd6b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.121824 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7c8665b49f-vcpxm_a4b72676-a66b-4a41-80c1-1c634debf0ac/init/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 
15:40:52.254870 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7c8665b49f-vcpxm_a4b72676-a66b-4a41-80c1-1c634debf0ac/init/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.385776 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-cnjsz_cdfc1174-2dd9-4ac6-9340-eb19ea8c7a68/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.458326 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7c8665b49f-vcpxm_a4b72676-a66b-4a41-80c1-1c634debf0ac/dnsmasq-dns/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.524140 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6fa86b2e-2264-479f-861c-3d03d6e5edd4/glance-httpd/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.584500 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6fa86b2e-2264-479f-861c-3d03d6e5edd4/glance-log/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.702471 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ab7d4e94-9da8-4042-a857-27194e95d788/glance-log/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.726284 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ab7d4e94-9da8-4042-a857-27194e95d788/glance-httpd/0.log" Jan 27 15:40:52 crc kubenswrapper[4833]: I0127 15:40:52.960874 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cd9489696-52kzm_534c5b75-240a-4ded-bb13-f05eb3158527/horizon/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.112506 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tj7tq_90180c07-3b18-4ce7-ae5b-c3288c171195/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.264789 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g4k96_6c84c399-b62e-4a5d-94b1-b5e186b20a93/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.603564 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492101-zch2v_4186bda5-641e-4266-8207-2618451a497e/keystone-cron/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.705454 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cd9489696-52kzm_534c5b75-240a-4ded-bb13-f05eb3158527/horizon-log/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.844605 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f38cf44c-23a5-428d-8598-ec073d2148bf/kube-state-metrics/0.log" Jan 27 15:40:53 crc kubenswrapper[4833]: I0127 15:40:53.880541 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-68475b756f-rzm75_fef33f59-fb9d-49e0-b9fb-70636656f7c7/keystone-api/0.log" Jan 27 15:40:54 crc kubenswrapper[4833]: I0127 15:40:54.046039 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-mg569_f18f58a2-a7ce-4714-838e-47e089f59cff/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:54 crc kubenswrapper[4833]: I0127 15:40:54.560300 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bdcd59f97-6hlst_0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5/neutron-httpd/0.log" Jan 27 15:40:54 crc kubenswrapper[4833]: I0127 15:40:54.602879 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6bdcd59f97-6hlst_0995eaa4-deb4-4bd9-ae7e-bc0547ff06c5/neutron-api/0.log" Jan 27 15:40:54 crc kubenswrapper[4833]: I0127 15:40:54.665592 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-dn2tm_01ef9ac5-6b63-4441-ab5e-d700019bbe30/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:55 crc kubenswrapper[4833]: I0127 15:40:55.664099 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2bcf5035-0b64-4321-b0e7-67ec66e543b9/nova-cell0-conductor-conductor/0.log" Jan 27 15:40:55 crc kubenswrapper[4833]: I0127 15:40:55.774092 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_50858935-78fe-4d2d-8390-20a264e996f3/nova-api-log/0.log" Jan 27 15:40:55 crc kubenswrapper[4833]: I0127 15:40:55.793923 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_34ce5a19-5697-4b62-8ec7-220c35fb2123/nova-cell1-conductor-conductor/0.log" Jan 27 15:40:56 crc kubenswrapper[4833]: I0127 15:40:56.185695 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_50858935-78fe-4d2d-8390-20a264e996f3/nova-api-api/0.log" Jan 27 15:40:56 crc kubenswrapper[4833]: I0127 15:40:56.194321 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_81be4ddd-ca35-4605-a032-96d22c32ffca/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 15:40:56 crc kubenswrapper[4833]: I0127 15:40:56.200628 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-n889r_f68646f8-f595-44de-898d-94a98ffb6408/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:56 crc kubenswrapper[4833]: I0127 15:40:56.451274 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_57de8349-e7e0-4ad3-ad99-8ba55b963447/nova-metadata-log/0.log" Jan 27 15:40:56 crc kubenswrapper[4833]: I0127 15:40:56.770999 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_08412a30-4bd0-44dc-a2c8-09db6cf3fc9a/nova-scheduler-scheduler/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.058632 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4a22294-2921-44ee-bdf0-41c631d2962c/mysql-bootstrap/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.176293 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4a22294-2921-44ee-bdf0-41c631d2962c/mysql-bootstrap/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.276683 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a4a22294-2921-44ee-bdf0-41c631d2962c/galera/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.409161 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c797d35-1d15-4eed-a88c-20fd3aa64b91/mysql-bootstrap/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.582548 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c797d35-1d15-4eed-a88c-20fd3aa64b91/mysql-bootstrap/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.648500 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c797d35-1d15-4eed-a88c-20fd3aa64b91/galera/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.779741 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6a29332d-3473-40f9-950e-9cd9249a37aa/openstackclient/0.log" Jan 27 15:40:57 crc kubenswrapper[4833]: I0127 15:40:57.843755 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-6584j_71bea80e-a86d-40c6-b72f-9bab663cc6ea/ovn-controller/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.067566 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-695lh_a8938ddd-5e46-4314-af72-13a83905b6c4/openstack-network-exporter/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.223808 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xc7c_9c97e6a0-c4b8-4f8d-ac90-d28e93a48030/ovsdb-server-init/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.442389 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xc7c_9c97e6a0-c4b8-4f8d-ac90-d28e93a48030/ovsdb-server-init/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.454275 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_57de8349-e7e0-4ad3-ad99-8ba55b963447/nova-metadata-metadata/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.457068 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xc7c_9c97e6a0-c4b8-4f8d-ac90-d28e93a48030/ovsdb-server/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.462814 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9xc7c_9c97e6a0-c4b8-4f8d-ac90-d28e93a48030/ovs-vswitchd/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.641174 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a76372e9-2f46-4b52-9c11-842331d4357f/openstack-network-exporter/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.721952 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ng8m9_0d6b0751-b1bd-44ed-b6df-942f63c8b191/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 
15:40:58.867409 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a76372e9-2f46-4b52-9c11-842331d4357f/ovn-northd/0.log" Jan 27 15:40:58 crc kubenswrapper[4833]: I0127 15:40:58.904514 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1/openstack-network-exporter/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.045017 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4fbbf9ae-c9e5-4bcf-9389-52eb0b7d5ae1/ovsdbserver-nb/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.146838 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c/openstack-network-exporter/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.230157 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_22b3a12f-a1eb-43c2-8ce8-3d0aa9b8d99c/ovsdbserver-sb/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.526140 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9898dddd-efe1-4386-a386-723cd4e3b1e9/init-config-reloader/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.565614 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-cd7556dcb-c7h4r_cc2a96c2-908d-48f8-97c7-bc4a59f1caff/placement-api/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.613335 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-cd7556dcb-c7h4r_cc2a96c2-908d-48f8-97c7-bc4a59f1caff/placement-log/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.765783 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9898dddd-efe1-4386-a386-723cd4e3b1e9/init-config-reloader/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.773255 
4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9898dddd-efe1-4386-a386-723cd4e3b1e9/config-reloader/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.786360 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:40:59 crc kubenswrapper[4833]: E0127 15:40:59.787932 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30454f09-aea0-4acc-897f-86b14b16a250" containerName="container-00" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.787955 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="30454f09-aea0-4acc-897f-86b14b16a250" containerName="container-00" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.788204 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="30454f09-aea0-4acc-897f-86b14b16a250" containerName="container-00" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.789735 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.799981 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.884421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.884502 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.884695 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9898dddd-efe1-4386-a386-723cd4e3b1e9/thanos-sidecar/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.884835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.920598 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_9898dddd-efe1-4386-a386-723cd4e3b1e9/prometheus/0.log" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.988735 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.988799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.988926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.989322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:40:59 crc kubenswrapper[4833]: I0127 15:40:59.989357 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.009545 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx\") pod \"redhat-operators-bzwvv\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.091840 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_90d23272-4189-49d5-9df5-e7347a122434/setup-container/0.log" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.118510 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.421114 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_90d23272-4189-49d5-9df5-e7347a122434/setup-container/0.log" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.507283 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_90d23272-4189-49d5-9df5-e7347a122434/rabbitmq/0.log" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.587088 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b3d13b73-8eab-4c26-abe0-bdda094d795b/setup-container/0.log" Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.698301 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:41:00 crc kubenswrapper[4833]: I0127 15:41:00.871192 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b3d13b73-8eab-4c26-abe0-bdda094d795b/setup-container/0.log" Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.071648 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b3d13b73-8eab-4c26-abe0-bdda094d795b/rabbitmq/0.log" Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.167788 4833 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fcn5f_0209691f-9aa1-4c9e-abeb-682686b65cb5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.368140 4833 generic.go:334] "Generic (PLEG): container finished" podID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerID="0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270" exitCode=0 Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.368203 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerDied","Data":"0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270"} Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.368232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerStarted","Data":"d313a81e150453f8c956c846a113fbc0e88e830eb5a45dbd70bc57af3aa732f6"} Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.440973 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2fsrk_7a00a943-178b-4719-b687-d8dc678f41bd/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.553381 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9jzgp_4ac48901-049e-4506-b266-fa322c384c6b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:01 crc kubenswrapper[4833]: I0127 15:41:01.979247 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xcw7g_5eea495f-1339-43a0-9ec7-b50211d609d2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.101678 4833 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ntzvk_a4cf7b42-9be9-4e04-853a-a7f0e40edfe3/ssh-known-hosts-edpm-deployment/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.210130 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:41:02 crc kubenswrapper[4833]: E0127 15:41:02.210545 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.260216 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d4ff448c-rqtwt_dae07283-7914-42d0-be9e-93d61eb88267/proxy-server/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.357707 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-4cbg4_11a66058-c0bb-4357-a752-8823939d7ee3/swift-ring-rebalance/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.496809 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d4ff448c-rqtwt_dae07283-7914-42d0-be9e-93d61eb88267/proxy-httpd/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.560588 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/account-auditor/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.632022 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/account-reaper/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 
15:41:02.789402 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/account-server/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.841626 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/container-auditor/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.849171 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/account-replicator/0.log" Jan 27 15:41:02 crc kubenswrapper[4833]: I0127 15:41:02.943561 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/container-replicator/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.039879 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/container-server/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.106001 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/container-updater/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.186175 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/object-expirer/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.228439 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_cf31f7d3-86f9-490a-806a-3944c7d60c10/memcached/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.262954 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/object-auditor/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.453202 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/object-replicator/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.508857 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/object-server/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.577598 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/object-updater/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.876044 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/swift-recon-cron/0.log" Jan 27 15:41:03 crc kubenswrapper[4833]: I0127 15:41:03.961294 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df43a2ef-c36c-4b08-bee6-6820e443220c/rsync/0.log" Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.021539 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4fh9n_de21b54e-baa7-4329-afa1-44caba34567e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.174547 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c501d21c-94f8-4a2f-966c-65c04b362809/test-operator-logs-container/0.log" Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.186753 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5faf0556-c6da-4c93-9ce7-f02ed716c092/tempest-tests-tempest-tests-runner/0.log" Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.278057 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-899ks_6b8c7eee-cf33-40b8-82b9-e88287b52d3a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.405972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerStarted","Data":"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf"} Jan 27 15:41:04 crc kubenswrapper[4833]: I0127 15:41:04.976540 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_5265f8e8-b9f5-471d-876e-3d8ebe3ec895/watcher-applier/0.log" Jan 27 15:41:05 crc kubenswrapper[4833]: I0127 15:41:05.377234 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_68ad57c5-23b4-4243-9700-f67937e1378d/watcher-api-log/0.log" Jan 27 15:41:05 crc kubenswrapper[4833]: I0127 15:41:05.821929 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_099ff0ad-2eba-43e9-99e3-86fb4b476793/watcher-decision-engine/0.log" Jan 27 15:41:07 crc kubenswrapper[4833]: I0127 15:41:07.513142 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_68ad57c5-23b4-4243-9700-f67937e1378d/watcher-api/0.log" Jan 27 15:41:09 crc kubenswrapper[4833]: I0127 15:41:09.454620 4833 generic.go:334] "Generic (PLEG): container finished" podID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerID="858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf" exitCode=0 Jan 27 15:41:09 crc kubenswrapper[4833]: I0127 15:41:09.454656 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerDied","Data":"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf"} Jan 27 15:41:10 crc kubenswrapper[4833]: I0127 
15:41:10.466340 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerStarted","Data":"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8"} Jan 27 15:41:10 crc kubenswrapper[4833]: I0127 15:41:10.501318 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bzwvv" podStartSLOduration=3.005802832 podStartE2EDuration="11.501290768s" podCreationTimestamp="2026-01-27 15:40:59 +0000 UTC" firstStartedPulling="2026-01-27 15:41:01.369631746 +0000 UTC m=+5363.020956148" lastFinishedPulling="2026-01-27 15:41:09.865119682 +0000 UTC m=+5371.516444084" observedRunningTime="2026-01-27 15:41:10.492870132 +0000 UTC m=+5372.144194544" watchObservedRunningTime="2026-01-27 15:41:10.501290768 +0000 UTC m=+5372.152615190" Jan 27 15:41:15 crc kubenswrapper[4833]: I0127 15:41:15.217811 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:41:15 crc kubenswrapper[4833]: I0127 15:41:15.515252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012"} Jan 27 15:41:20 crc kubenswrapper[4833]: I0127 15:41:20.119496 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:20 crc kubenswrapper[4833]: I0127 15:41:20.120114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:20 crc kubenswrapper[4833]: I0127 15:41:20.187597 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 
15:41:20 crc kubenswrapper[4833]: I0127 15:41:20.627798 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:20 crc kubenswrapper[4833]: I0127 15:41:20.698322 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:41:22 crc kubenswrapper[4833]: I0127 15:41:22.581001 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bzwvv" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="registry-server" containerID="cri-o://bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8" gracePeriod=2 Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.089608 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.252250 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx\") pod \"7421ba31-af13-46c6-91c7-1f271d8d25e0\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.252490 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content\") pod \"7421ba31-af13-46c6-91c7-1f271d8d25e0\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.252526 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities\") pod \"7421ba31-af13-46c6-91c7-1f271d8d25e0\" (UID: \"7421ba31-af13-46c6-91c7-1f271d8d25e0\") " Jan 27 15:41:23 
crc kubenswrapper[4833]: I0127 15:41:23.253506 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities" (OuterVolumeSpecName: "utilities") pod "7421ba31-af13-46c6-91c7-1f271d8d25e0" (UID: "7421ba31-af13-46c6-91c7-1f271d8d25e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.311777 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx" (OuterVolumeSpecName: "kube-api-access-sblrx") pod "7421ba31-af13-46c6-91c7-1f271d8d25e0" (UID: "7421ba31-af13-46c6-91c7-1f271d8d25e0"). InnerVolumeSpecName "kube-api-access-sblrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.355399 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/7421ba31-af13-46c6-91c7-1f271d8d25e0-kube-api-access-sblrx\") on node \"crc\" DevicePath \"\"" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.355439 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.378698 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7421ba31-af13-46c6-91c7-1f271d8d25e0" (UID: "7421ba31-af13-46c6-91c7-1f271d8d25e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.457606 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7421ba31-af13-46c6-91c7-1f271d8d25e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.606990 4833 generic.go:334] "Generic (PLEG): container finished" podID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerID="bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8" exitCode=0 Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.607042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerDied","Data":"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8"} Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.607078 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzwvv" event={"ID":"7421ba31-af13-46c6-91c7-1f271d8d25e0","Type":"ContainerDied","Data":"d313a81e150453f8c956c846a113fbc0e88e830eb5a45dbd70bc57af3aa732f6"} Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.607099 4833 scope.go:117] "RemoveContainer" containerID="bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.607251 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzwvv" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.644147 4833 scope.go:117] "RemoveContainer" containerID="858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.648931 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.661380 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bzwvv"] Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.675042 4833 scope.go:117] "RemoveContainer" containerID="0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.716293 4833 scope.go:117] "RemoveContainer" containerID="bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8" Jan 27 15:41:23 crc kubenswrapper[4833]: E0127 15:41:23.716736 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8\": container with ID starting with bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8 not found: ID does not exist" containerID="bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.716775 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8"} err="failed to get container status \"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8\": rpc error: code = NotFound desc = could not find container \"bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8\": container with ID starting with bf8a058ed335b029182a7095f09a5eb75f7319185946615e22ef260cfffbefb8 not found: ID does 
not exist" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.716827 4833 scope.go:117] "RemoveContainer" containerID="858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf" Jan 27 15:41:23 crc kubenswrapper[4833]: E0127 15:41:23.717123 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf\": container with ID starting with 858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf not found: ID does not exist" containerID="858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.717149 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf"} err="failed to get container status \"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf\": rpc error: code = NotFound desc = could not find container \"858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf\": container with ID starting with 858f9bf366ce6fde905a7f841d3ce2e705ac29543b775b5cc8d1656ab9993cbf not found: ID does not exist" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.717170 4833 scope.go:117] "RemoveContainer" containerID="0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270" Jan 27 15:41:23 crc kubenswrapper[4833]: E0127 15:41:23.717381 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270\": container with ID starting with 0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270 not found: ID does not exist" containerID="0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270" Jan 27 15:41:23 crc kubenswrapper[4833]: I0127 15:41:23.717408 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270"} err="failed to get container status \"0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270\": rpc error: code = NotFound desc = could not find container \"0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270\": container with ID starting with 0d5e56db916f196883986a26f63f37859bc29a9cca304816ad4d1fc51e720270 not found: ID does not exist" Jan 27 15:41:25 crc kubenswrapper[4833]: I0127 15:41:25.223844 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" path="/var/lib/kubelet/pods/7421ba31-af13-46c6-91c7-1f271d8d25e0/volumes" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.106948 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/util/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.252399 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/util/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.295168 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/pull/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.365605 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/pull/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.485867 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/extract/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.489872 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/pull/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.533260 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0443dc5685edc44ab67fc0f0e211ac42496abf7be91933f469d23fe1a7kxq7m_f63f4475-df09-4e45-b77e-4f498ea12af7/util/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.729299 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-kp86b_e74c1bee-e2b8-4b35-8ced-7832d9c1a824/manager/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.885695 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ftwtq_edb43be5-768f-4843-be5d-9826aa2e1a11/manager/0.log" Jan 27 15:41:36 crc kubenswrapper[4833]: I0127 15:41:36.937665 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-wwtlj_253c59e7-dd33-4606-bd6e-21763472c862/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.167771 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-qg9zd_f8378dce-4e90-4373-94ca-bd0420827dea/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.182992 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fl5ld_aeeb554e-7369-4b95-8583-8e4b083e953c/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.369230 
4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8tj4r_ada55264-0fda-4f30-92a7-28add3873740/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.708153 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-j2jq9_4c6e97a8-b1f3-4c3f-a9fb-7dc03163a2b0/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.712641 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-9wxhs_a9e13c49-33ca-4c87-9ef9-ae446cfb519e/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.876951 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-k2wt9_a02f7d35-b75c-44f5-ad70-f08b553de32c/manager/0.log" Jan 27 15:41:37 crc kubenswrapper[4833]: I0127 15:41:37.982932 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-92hpt_e1a8680b-2bd2-43c7-839c-8b2b899a953b/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 15:41:38.171397 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-wp9jt_81067808-b0be-41f6-a1f3-462cb917996b/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 15:41:38.273111 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-r56wb_f8d76fd0-1a35-4848-8e19-611f437c0b2e/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 15:41:38.438932 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-tpkqs_7a907425-198b-4c21-b16c-55d94617275f/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 
15:41:38.472701 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-x5tlj_dac55dc4-5aae-4281-a74f-10260dd5b1ac/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 15:41:38.697180 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854flztk_a65c2925-9923-40c5-aba0-b9342b6dab40/manager/0.log" Jan 27 15:41:38 crc kubenswrapper[4833]: I0127 15:41:38.850330 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77d48dd9c-9hsqq_6d6c980b-f32a-45f8-92ef-b7d6acd5d5cd/operator/0.log" Jan 27 15:41:39 crc kubenswrapper[4833]: I0127 15:41:39.266551 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jbnzz_d71f211e-9209-4c1c-891c-dd802162ec4a/registry-server/0.log" Jan 27 15:41:39 crc kubenswrapper[4833]: I0127 15:41:39.495982 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-tbb5q_fcf3c608-e55e-490d-9a69-0f00d7fef3fd/manager/0.log" Jan 27 15:41:39 crc kubenswrapper[4833]: I0127 15:41:39.611171 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-pdtk9_c9891074-5241-4f51-8a9d-b28240983c3a/manager/0.log" Jan 27 15:41:39 crc kubenswrapper[4833]: I0127 15:41:39.779237 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5zxfp_906c1999-d97c-4a87-b1d0-8d06bef0b396/operator/0.log" Jan 27 15:41:39 crc kubenswrapper[4833]: I0127 15:41:39.966541 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6bmlg_b304e371-f956-4d28-bb4e-3a1a9ae3e860/manager/0.log" Jan 27 15:41:40 crc 
kubenswrapper[4833]: I0127 15:41:40.234167 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-b5xnd_df55c968-3acc-4ffc-a674-bda364677610/manager/0.log" Jan 27 15:41:40 crc kubenswrapper[4833]: I0127 15:41:40.363568 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-zcgrn_90b9f19a-9a04-408e-ad25-eeccd712b2d3/manager/0.log" Jan 27 15:41:40 crc kubenswrapper[4833]: I0127 15:41:40.462352 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-86c96597bf-79g5g_56cd2b8a-9c8e-44c8-a62e-5672b19c9d3d/manager/0.log" Jan 27 15:41:40 crc kubenswrapper[4833]: I0127 15:41:40.481337 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f7cb759dd-k25f5_70855ffd-2b62-4761-a9ac-b944d0e1115a/manager/0.log" Jan 27 15:42:02 crc kubenswrapper[4833]: I0127 15:42:02.591375 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gmnmv_9611f809-5d0e-47d9-90ad-b0799b4b786b/control-plane-machine-set-operator/0.log" Jan 27 15:42:02 crc kubenswrapper[4833]: I0127 15:42:02.759615 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wblbf_6d523e68-740e-4514-a2eb-40ada703a657/kube-rbac-proxy/0.log" Jan 27 15:42:02 crc kubenswrapper[4833]: I0127 15:42:02.790740 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wblbf_6d523e68-740e-4514-a2eb-40ada703a657/machine-api-operator/0.log" Jan 27 15:42:15 crc kubenswrapper[4833]: I0127 15:42:15.435494 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-cbksb_6090a3c8-feca-42f0-bafb-0886ef1a591a/cert-manager-controller/0.log" Jan 27 15:42:15 crc kubenswrapper[4833]: I0127 15:42:15.554286 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-trpwd_d9f55934-05a1-4c77-b428-337683dfb09c/cert-manager-cainjector/0.log" Jan 27 15:42:15 crc kubenswrapper[4833]: I0127 15:42:15.577265 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gwjhk_efb6fbe6-fd59-4072-ac9c-99534d8d97e5/cert-manager-webhook/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.267935 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-srxsb_b3435cdb-b0c8-4961-9cdb-cf92c4c03c01/nmstate-console-plugin/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.444956 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8fprr_845eae34-dda9-41fe-8388-891597f61a06/nmstate-handler/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.457417 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-424jq_088ff1e3-c56b-4336-a7ba-3728a0f1a9ff/kube-rbac-proxy/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.541493 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-424jq_088ff1e3-c56b-4336-a7ba-3728a0f1a9ff/nmstate-metrics/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.617124 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-bmcd8_a65650e4-9186-4e0a-a896-d372f80b1843/nmstate-operator/0.log" Jan 27 15:42:29 crc kubenswrapper[4833]: I0127 15:42:29.714788 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hfmfn_8151e9d2-3b0b-494c-b563-3d0615d6d513/nmstate-webhook/0.log" Jan 27 15:42:43 crc kubenswrapper[4833]: I0127 15:42:43.509748 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bhffj_4c00dc67-8748-49eb-ae3c-8ffeb7bbab98/prometheus-operator/0.log" Jan 27 15:42:43 crc kubenswrapper[4833]: I0127 15:42:43.707788 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f86595b7-7jp89_86100396-5b44-496c-b05b-b39fbe052fa8/prometheus-operator-admission-webhook/0.log" Jan 27 15:42:43 crc kubenswrapper[4833]: I0127 15:42:43.760361 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv_29b2f4e4-a61f-4173-9297-ef6d1d46330a/prometheus-operator-admission-webhook/0.log" Jan 27 15:42:43 crc kubenswrapper[4833]: I0127 15:42:43.902545 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wp7pq_c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3/operator/0.log" Jan 27 15:42:43 crc kubenswrapper[4833]: I0127 15:42:43.979623 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7hv4r_2302ff6f-4d3a-4517-a8e7-4de46e9456c1/perses-operator/0.log" Jan 27 15:42:56 crc kubenswrapper[4833]: I0127 15:42:56.995571 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-swvk9_88a8bf6c-e725-44ef-8a09-03ff49fa1546/kube-rbac-proxy/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.191562 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-swvk9_88a8bf6c-e725-44ef-8a09-03ff49fa1546/controller/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.244584 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-frr-files/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.414664 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-reloader/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.456854 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-metrics/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.471190 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-frr-files/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.484675 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-reloader/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.720265 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-reloader/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.728085 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-metrics/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.737835 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-metrics/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.737983 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-frr-files/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.898810 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-reloader/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.943562 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-metrics/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.943678 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/cp-frr-files/0.log" Jan 27 15:42:57 crc kubenswrapper[4833]: I0127 15:42:57.979410 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/controller/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.147757 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/frr-metrics/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.198555 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/kube-rbac-proxy/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.245068 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/kube-rbac-proxy-frr/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.363767 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/reloader/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.476008 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-lxtf8_1ad3206d-30a8-4d32-9e9f-b6e04c016001/frr-k8s-webhook-server/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.660276 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6984b8c5f8-9xnnw_1d930703-d737-4e5f-bc0d-8458cf05c635/manager/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.810547 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56b8697675-bd2zg_b90aca3e-1c5b-42d9-a04c-a40ac98e9521/webhook-server/0.log" Jan 27 15:42:58 crc kubenswrapper[4833]: I0127 15:42:58.956607 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-s7g75_312df926-5f4b-4dee-b49f-ab00e0748a8d/kube-rbac-proxy/0.log" Jan 27 15:42:59 crc kubenswrapper[4833]: I0127 15:42:59.699274 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-s7g75_312df926-5f4b-4dee-b49f-ab00e0748a8d/speaker/0.log" Jan 27 15:42:59 crc kubenswrapper[4833]: I0127 15:42:59.903352 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tvh76_08cd7e0d-fb76-4bad-b82f-1ce499053722/frr/0.log" Jan 27 15:43:11 crc kubenswrapper[4833]: I0127 15:43:11.658998 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/util/0.log" Jan 27 15:43:11 crc kubenswrapper[4833]: I0127 15:43:11.909674 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/util/0.log" Jan 27 15:43:11 crc kubenswrapper[4833]: I0127 15:43:11.988633 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/pull/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.005217 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/pull/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.163253 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/util/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.191255 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/extract/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.227826 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7nq2w_e8423c47-b673-4d7f-ace2-68f58b293b5d/pull/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.369919 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/util/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.575854 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/util/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.591917 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/pull/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.634603 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/pull/0.log" Jan 27 
15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.756207 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/util/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.809772 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/pull/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.811913 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7138m9k7_dfd1a4fc-5465-4778-9c41-4be0cf541237/extract/0.log" Jan 27 15:43:12 crc kubenswrapper[4833]: I0127 15:43:12.959496 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/util/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.178374 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/pull/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.200369 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/util/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.201788 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/pull/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.364982 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/pull/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.382551 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/extract/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.392344 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z7h5v_019c9be7-9fb3-48c8-97e6-fe7463d16b34/util/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.528885 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-utilities/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.749846 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-content/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.758185 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-content/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.767100 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-utilities/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.970065 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-utilities/0.log" Jan 27 15:43:13 crc kubenswrapper[4833]: I0127 15:43:13.982809 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/extract-content/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.175312 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-utilities/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.445935 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-content/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.467289 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-content/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.587800 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-utilities/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.671496 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hk97g_51526a16-f6c1-4cda-935a-49ad10c53a33/registry-server/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.760048 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-content/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.778010 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/extract-utilities/0.log" Jan 27 15:43:14 crc kubenswrapper[4833]: I0127 15:43:14.988200 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-spsms_3fda7467-b273-42f8-a470-89697e7b7a53/marketplace-operator/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.191394 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-utilities/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.318483 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cx57v_ace8eea4-3108-4f42-ab6c-1b8a2c6b2980/registry-server/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.429973 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-utilities/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.430232 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-content/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.461692 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-content/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.624339 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-utilities/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.635076 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/extract-content/0.log" Jan 27 15:43:15 crc kubenswrapper[4833]: I0127 15:43:15.816198 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-utilities/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.035582 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-content/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.037597 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p6ft9_3275450d-1578-4d06-955f-c667eadb6a3b/registry-server/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.055583 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-content/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.067678 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-utilities/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.249939 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-content/0.log" Jan 27 15:43:16 crc kubenswrapper[4833]: I0127 15:43:16.253210 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/extract-utilities/0.log" Jan 27 15:43:17 crc kubenswrapper[4833]: I0127 15:43:16.999805 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fnl_dd818cb4-af75-444f-8e33-25cf79769b03/registry-server/0.log" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.915345 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:23 crc kubenswrapper[4833]: E0127 15:43:23.919748 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="extract-content" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.919858 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="extract-content" Jan 27 15:43:23 crc kubenswrapper[4833]: E0127 15:43:23.919936 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="registry-server" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.920016 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="registry-server" Jan 27 15:43:23 crc kubenswrapper[4833]: E0127 15:43:23.920112 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="extract-utilities" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.920172 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="extract-utilities" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.920612 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7421ba31-af13-46c6-91c7-1f271d8d25e0" containerName="registry-server" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.922292 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.927021 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.946484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrkx\" (UniqueName: \"kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.946672 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:23 crc kubenswrapper[4833]: I0127 15:43:23.946708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.048958 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmrkx\" (UniqueName: \"kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.049407 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.049548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.050149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.050278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.071047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmrkx\" (UniqueName: \"kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx\") pod \"community-operators-pfm96\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.254645 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:24 crc kubenswrapper[4833]: I0127 15:43:24.968784 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:25 crc kubenswrapper[4833]: I0127 15:43:25.786025 4833 generic.go:334] "Generic (PLEG): container finished" podID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerID="ec985c955ba1209d1a01e89c9f8fe63c5451cf2b19e5cd6d944ca5a41cd929c6" exitCode=0 Jan 27 15:43:25 crc kubenswrapper[4833]: I0127 15:43:25.786080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerDied","Data":"ec985c955ba1209d1a01e89c9f8fe63c5451cf2b19e5cd6d944ca5a41cd929c6"} Jan 27 15:43:25 crc kubenswrapper[4833]: I0127 15:43:25.786329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerStarted","Data":"31e6b2ab717c356247055d35c04ad438a7da6260b113e957d4698910a54a7b3c"} Jan 27 15:43:25 crc kubenswrapper[4833]: I0127 15:43:25.788575 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:43:26 crc kubenswrapper[4833]: I0127 15:43:26.795885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerStarted","Data":"5949e88849a3cf2acb1f449b80e284cb8b6ea1d8e8d13df9d81e7db29c3146da"} Jan 27 15:43:27 crc kubenswrapper[4833]: I0127 15:43:27.807095 4833 generic.go:334] "Generic (PLEG): container finished" podID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerID="5949e88849a3cf2acb1f449b80e284cb8b6ea1d8e8d13df9d81e7db29c3146da" exitCode=0 Jan 27 15:43:27 crc kubenswrapper[4833]: I0127 15:43:27.807193 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerDied","Data":"5949e88849a3cf2acb1f449b80e284cb8b6ea1d8e8d13df9d81e7db29c3146da"} Jan 27 15:43:28 crc kubenswrapper[4833]: I0127 15:43:28.818974 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerStarted","Data":"f74949579f24cd9e48f9f0427225e300254c4ef9cf14cd7850b2625b85f9c54c"} Jan 27 15:43:28 crc kubenswrapper[4833]: I0127 15:43:28.841914 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pfm96" podStartSLOduration=3.435700552 podStartE2EDuration="5.841889005s" podCreationTimestamp="2026-01-27 15:43:23 +0000 UTC" firstStartedPulling="2026-01-27 15:43:25.788217369 +0000 UTC m=+5507.439541771" lastFinishedPulling="2026-01-27 15:43:28.194405832 +0000 UTC m=+5509.845730224" observedRunningTime="2026-01-27 15:43:28.833973461 +0000 UTC m=+5510.485297873" watchObservedRunningTime="2026-01-27 15:43:28.841889005 +0000 UTC m=+5510.493213417" Jan 27 15:43:31 crc kubenswrapper[4833]: I0127 15:43:31.027842 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bhffj_4c00dc67-8748-49eb-ae3c-8ffeb7bbab98/prometheus-operator/0.log" Jan 27 15:43:31 crc kubenswrapper[4833]: I0127 15:43:31.064583 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f86595b7-x9wdv_29b2f4e4-a61f-4173-9297-ef6d1d46330a/prometheus-operator-admission-webhook/0.log" Jan 27 15:43:31 crc kubenswrapper[4833]: I0127 15:43:31.134924 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6f86595b7-7jp89_86100396-5b44-496c-b05b-b39fbe052fa8/prometheus-operator-admission-webhook/0.log" Jan 27 15:43:31 crc kubenswrapper[4833]: I0127 15:43:31.288138 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7hv4r_2302ff6f-4d3a-4517-a8e7-4de46e9456c1/perses-operator/0.log" Jan 27 15:43:31 crc kubenswrapper[4833]: I0127 15:43:31.322710 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wp7pq_c15ae3a9-dcc0-4ce5-b3ff-9e4bd72b09a3/operator/0.log" Jan 27 15:43:32 crc kubenswrapper[4833]: I0127 15:43:32.260359 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:43:32 crc kubenswrapper[4833]: I0127 15:43:32.260427 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:43:34 crc kubenswrapper[4833]: I0127 15:43:34.255391 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:34 crc kubenswrapper[4833]: I0127 15:43:34.255878 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:34 crc kubenswrapper[4833]: I0127 15:43:34.310056 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:34 crc 
kubenswrapper[4833]: I0127 15:43:34.920091 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:34 crc kubenswrapper[4833]: I0127 15:43:34.966544 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:36 crc kubenswrapper[4833]: I0127 15:43:36.891380 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pfm96" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="registry-server" containerID="cri-o://f74949579f24cd9e48f9f0427225e300254c4ef9cf14cd7850b2625b85f9c54c" gracePeriod=2 Jan 27 15:43:37 crc kubenswrapper[4833]: I0127 15:43:37.901300 4833 generic.go:334] "Generic (PLEG): container finished" podID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerID="f74949579f24cd9e48f9f0427225e300254c4ef9cf14cd7850b2625b85f9c54c" exitCode=0 Jan 27 15:43:37 crc kubenswrapper[4833]: I0127 15:43:37.901397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerDied","Data":"f74949579f24cd9e48f9f0427225e300254c4ef9cf14cd7850b2625b85f9c54c"} Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.250779 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.365131 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities\") pod \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.365364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content\") pod \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.365393 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmrkx\" (UniqueName: \"kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx\") pod \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\" (UID: \"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a\") " Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.366260 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities" (OuterVolumeSpecName: "utilities") pod "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" (UID: "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.366873 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.377860 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx" (OuterVolumeSpecName: "kube-api-access-xmrkx") pod "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" (UID: "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a"). InnerVolumeSpecName "kube-api-access-xmrkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.472013 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmrkx\" (UniqueName: \"kubernetes.io/projected/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-kube-api-access-xmrkx\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.487814 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" (UID: "99c7ccd7-5915-4b6d-a14c-92c3bd710d4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.574938 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.927690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfm96" event={"ID":"99c7ccd7-5915-4b6d-a14c-92c3bd710d4a","Type":"ContainerDied","Data":"31e6b2ab717c356247055d35c04ad438a7da6260b113e957d4698910a54a7b3c"} Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.927741 4833 scope.go:117] "RemoveContainer" containerID="f74949579f24cd9e48f9f0427225e300254c4ef9cf14cd7850b2625b85f9c54c" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.927863 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfm96" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.970922 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.976139 4833 scope.go:117] "RemoveContainer" containerID="5949e88849a3cf2acb1f449b80e284cb8b6ea1d8e8d13df9d81e7db29c3146da" Jan 27 15:43:38 crc kubenswrapper[4833]: I0127 15:43:38.979381 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pfm96"] Jan 27 15:43:39 crc kubenswrapper[4833]: I0127 15:43:39.001104 4833 scope.go:117] "RemoveContainer" containerID="ec985c955ba1209d1a01e89c9f8fe63c5451cf2b19e5cd6d944ca5a41cd929c6" Jan 27 15:43:39 crc kubenswrapper[4833]: I0127 15:43:39.239755 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" path="/var/lib/kubelet/pods/99c7ccd7-5915-4b6d-a14c-92c3bd710d4a/volumes" Jan 27 15:44:02 crc 
kubenswrapper[4833]: I0127 15:44:02.260802 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:44:02 crc kubenswrapper[4833]: I0127 15:44:02.261344 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.956782 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:26 crc kubenswrapper[4833]: E0127 15:44:26.957772 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="extract-content" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.957789 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="extract-content" Jan 27 15:44:26 crc kubenswrapper[4833]: E0127 15:44:26.957816 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="extract-utilities" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.957825 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="extract-utilities" Jan 27 15:44:26 crc kubenswrapper[4833]: E0127 15:44:26.957844 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="registry-server" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.957851 4833 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="registry-server" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.958093 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c7ccd7-5915-4b6d-a14c-92c3bd710d4a" containerName="registry-server" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.959657 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:26 crc kubenswrapper[4833]: I0127 15:44:26.972636 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.044195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.044259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.044353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9nds\" (UniqueName: \"kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.145701 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9nds\" (UniqueName: \"kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.145871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.145907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.146352 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.146428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.165039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q9nds\" (UniqueName: \"kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds\") pod \"redhat-marketplace-2tvzj\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.291882 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:27 crc kubenswrapper[4833]: I0127 15:44:27.774532 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:27 crc kubenswrapper[4833]: W0127 15:44:27.779744 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecfd35c_af1a_4844_bf3c_162fe40f194f.slice/crio-0b655d8be46af04f3c035be3d426f2c5e21a841f6d63ec8bf75c8caff73d73f3 WatchSource:0}: Error finding container 0b655d8be46af04f3c035be3d426f2c5e21a841f6d63ec8bf75c8caff73d73f3: Status 404 returned error can't find the container with id 0b655d8be46af04f3c035be3d426f2c5e21a841f6d63ec8bf75c8caff73d73f3 Jan 27 15:44:28 crc kubenswrapper[4833]: I0127 15:44:28.445631 4833 generic.go:334] "Generic (PLEG): container finished" podID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerID="3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59" exitCode=0 Jan 27 15:44:28 crc kubenswrapper[4833]: I0127 15:44:28.445731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerDied","Data":"3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59"} Jan 27 15:44:28 crc kubenswrapper[4833]: I0127 15:44:28.445998 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" 
event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerStarted","Data":"0b655d8be46af04f3c035be3d426f2c5e21a841f6d63ec8bf75c8caff73d73f3"} Jan 27 15:44:29 crc kubenswrapper[4833]: I0127 15:44:29.455824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerStarted","Data":"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e"} Jan 27 15:44:30 crc kubenswrapper[4833]: I0127 15:44:30.466054 4833 generic.go:334] "Generic (PLEG): container finished" podID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerID="7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e" exitCode=0 Jan 27 15:44:30 crc kubenswrapper[4833]: I0127 15:44:30.466111 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerDied","Data":"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e"} Jan 27 15:44:31 crc kubenswrapper[4833]: I0127 15:44:31.480181 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerStarted","Data":"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00"} Jan 27 15:44:31 crc kubenswrapper[4833]: I0127 15:44:31.506979 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2tvzj" podStartSLOduration=3.096359811 podStartE2EDuration="5.506954953s" podCreationTimestamp="2026-01-27 15:44:26 +0000 UTC" firstStartedPulling="2026-01-27 15:44:28.448670904 +0000 UTC m=+5570.099995306" lastFinishedPulling="2026-01-27 15:44:30.859266046 +0000 UTC m=+5572.510590448" observedRunningTime="2026-01-27 15:44:31.504570675 +0000 UTC m=+5573.155895087" watchObservedRunningTime="2026-01-27 15:44:31.506954953 +0000 UTC 
m=+5573.158279355" Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.260953 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.261404 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.261565 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.262668 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.262818 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012" gracePeriod=600 Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.490537 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" 
containerID="19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012" exitCode=0 Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.490854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012"} Jan 27 15:44:32 crc kubenswrapper[4833]: I0127 15:44:32.490916 4833 scope.go:117] "RemoveContainer" containerID="db89eace03bf61572827a82833b14f9a59ff0143478e54cc1d153bc7519e249d" Jan 27 15:44:33 crc kubenswrapper[4833]: I0127 15:44:33.501626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerStarted","Data":"65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6"} Jan 27 15:44:37 crc kubenswrapper[4833]: I0127 15:44:37.292759 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:37 crc kubenswrapper[4833]: I0127 15:44:37.293424 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:37 crc kubenswrapper[4833]: I0127 15:44:37.347365 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:37 crc kubenswrapper[4833]: I0127 15:44:37.614502 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:37 crc kubenswrapper[4833]: I0127 15:44:37.674839 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:39 crc kubenswrapper[4833]: I0127 15:44:39.584554 4833 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-2tvzj" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="registry-server" containerID="cri-o://34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00" gracePeriod=2 Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.085127 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.258266 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content\") pod \"fecfd35c-af1a-4844-bf3c-162fe40f194f\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.258594 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities\") pod \"fecfd35c-af1a-4844-bf3c-162fe40f194f\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.258667 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9nds\" (UniqueName: \"kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds\") pod \"fecfd35c-af1a-4844-bf3c-162fe40f194f\" (UID: \"fecfd35c-af1a-4844-bf3c-162fe40f194f\") " Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.259543 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities" (OuterVolumeSpecName: "utilities") pod "fecfd35c-af1a-4844-bf3c-162fe40f194f" (UID: "fecfd35c-af1a-4844-bf3c-162fe40f194f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.266390 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds" (OuterVolumeSpecName: "kube-api-access-q9nds") pod "fecfd35c-af1a-4844-bf3c-162fe40f194f" (UID: "fecfd35c-af1a-4844-bf3c-162fe40f194f"). InnerVolumeSpecName "kube-api-access-q9nds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.281516 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fecfd35c-af1a-4844-bf3c-162fe40f194f" (UID: "fecfd35c-af1a-4844-bf3c-162fe40f194f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.361969 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.362045 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecfd35c-af1a-4844-bf3c-162fe40f194f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.362079 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9nds\" (UniqueName: \"kubernetes.io/projected/fecfd35c-af1a-4844-bf3c-162fe40f194f-kube-api-access-q9nds\") on node \"crc\" DevicePath \"\"" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.595253 4833 generic.go:334] "Generic (PLEG): container finished" podID="fecfd35c-af1a-4844-bf3c-162fe40f194f" 
containerID="34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00" exitCode=0 Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.595318 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2tvzj" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.595322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerDied","Data":"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00"} Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.596590 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2tvzj" event={"ID":"fecfd35c-af1a-4844-bf3c-162fe40f194f","Type":"ContainerDied","Data":"0b655d8be46af04f3c035be3d426f2c5e21a841f6d63ec8bf75c8caff73d73f3"} Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.596651 4833 scope.go:117] "RemoveContainer" containerID="34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.621138 4833 scope.go:117] "RemoveContainer" containerID="7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.634292 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.644402 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2tvzj"] Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.660885 4833 scope.go:117] "RemoveContainer" containerID="3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.702138 4833 scope.go:117] "RemoveContainer" containerID="34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00" Jan 27 
15:44:40 crc kubenswrapper[4833]: E0127 15:44:40.702640 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00\": container with ID starting with 34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00 not found: ID does not exist" containerID="34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.702671 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00"} err="failed to get container status \"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00\": rpc error: code = NotFound desc = could not find container \"34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00\": container with ID starting with 34556a771b0d4324a108f003006e876c62af7184bbc8662c00a9867aef6d6f00 not found: ID does not exist" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.702692 4833 scope.go:117] "RemoveContainer" containerID="7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e" Jan 27 15:44:40 crc kubenswrapper[4833]: E0127 15:44:40.702922 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e\": container with ID starting with 7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e not found: ID does not exist" containerID="7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.702945 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e"} err="failed to get container status 
\"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e\": rpc error: code = NotFound desc = could not find container \"7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e\": container with ID starting with 7793f031e514f8f9735fb09eff5a49243cea980f70aae11f54a985b8a061b38e not found: ID does not exist" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.702960 4833 scope.go:117] "RemoveContainer" containerID="3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59" Jan 27 15:44:40 crc kubenswrapper[4833]: E0127 15:44:40.703318 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59\": container with ID starting with 3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59 not found: ID does not exist" containerID="3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59" Jan 27 15:44:40 crc kubenswrapper[4833]: I0127 15:44:40.703343 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59"} err="failed to get container status \"3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59\": rpc error: code = NotFound desc = could not find container \"3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59\": container with ID starting with 3c33d4d783a7fae127136b7bc9def982fe130a7c042d0fd1c671ec8aabe0ce59 not found: ID does not exist" Jan 27 15:44:41 crc kubenswrapper[4833]: I0127 15:44:41.223681 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" path="/var/lib/kubelet/pods/fecfd35c-af1a-4844-bf3c-162fe40f194f/volumes" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.147424 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96"] Jan 27 15:45:00 crc kubenswrapper[4833]: E0127 15:45:00.148536 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.148553 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4833]: E0127 15:45:00.148599 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.148607 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4833]: E0127 15:45:00.148622 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.148630 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.148898 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecfd35c-af1a-4844-bf3c-162fe40f194f" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.149755 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.151923 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.152773 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.158716 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96"] Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.293787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.294296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbn24\" (UniqueName: \"kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.294377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.396736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.396804 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbn24\" (UniqueName: \"kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.396829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.400434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.406039 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.428165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbn24\" (UniqueName: \"kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24\") pod \"collect-profiles-29492145-bgx96\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.477622 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:00 crc kubenswrapper[4833]: I0127 15:45:00.941245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96"] Jan 27 15:45:01 crc kubenswrapper[4833]: I0127 15:45:01.783387 4833 generic.go:334] "Generic (PLEG): container finished" podID="7b16317c-9c75-428a-a604-87a0482d531d" containerID="a6df05bace4ac228862c8ca53e1a7bf1e390035d8e2505db51c96b4ebbc46471" exitCode=0 Jan 27 15:45:01 crc kubenswrapper[4833]: I0127 15:45:01.783723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" event={"ID":"7b16317c-9c75-428a-a604-87a0482d531d","Type":"ContainerDied","Data":"a6df05bace4ac228862c8ca53e1a7bf1e390035d8e2505db51c96b4ebbc46471"} Jan 27 15:45:01 crc kubenswrapper[4833]: I0127 15:45:01.783759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" 
event={"ID":"7b16317c-9c75-428a-a604-87a0482d531d","Type":"ContainerStarted","Data":"5c9a06ae50475546f8f59757075f63857bce756f830ebdc4f014a1e1b111dbcc"} Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.148654 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.163088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume\") pod \"7b16317c-9c75-428a-a604-87a0482d531d\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.163140 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbn24\" (UniqueName: \"kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24\") pod \"7b16317c-9c75-428a-a604-87a0482d531d\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.163186 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume\") pod \"7b16317c-9c75-428a-a604-87a0482d531d\" (UID: \"7b16317c-9c75-428a-a604-87a0482d531d\") " Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.165411 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b16317c-9c75-428a-a604-87a0482d531d" (UID: "7b16317c-9c75-428a-a604-87a0482d531d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.166303 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b16317c-9c75-428a-a604-87a0482d531d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.349747 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b16317c-9c75-428a-a604-87a0482d531d" (UID: "7b16317c-9c75-428a-a604-87a0482d531d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.350223 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24" (OuterVolumeSpecName: "kube-api-access-tbn24") pod "7b16317c-9c75-428a-a604-87a0482d531d" (UID: "7b16317c-9c75-428a-a604-87a0482d531d"). InnerVolumeSpecName "kube-api-access-tbn24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.372382 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbn24\" (UniqueName: \"kubernetes.io/projected/7b16317c-9c75-428a-a604-87a0482d531d-kube-api-access-tbn24\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.372422 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b16317c-9c75-428a-a604-87a0482d531d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.806680 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" event={"ID":"7b16317c-9c75-428a-a604-87a0482d531d","Type":"ContainerDied","Data":"5c9a06ae50475546f8f59757075f63857bce756f830ebdc4f014a1e1b111dbcc"} Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.806991 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9a06ae50475546f8f59757075f63857bce756f830ebdc4f014a1e1b111dbcc" Jan 27 15:45:03 crc kubenswrapper[4833]: I0127 15:45:03.806724 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-bgx96" Jan 27 15:45:04 crc kubenswrapper[4833]: I0127 15:45:04.224548 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj"] Jan 27 15:45:04 crc kubenswrapper[4833]: I0127 15:45:04.233278 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-nzqrj"] Jan 27 15:45:05 crc kubenswrapper[4833]: I0127 15:45:05.224968 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d2f66f-a552-4e92-8270-db03275d821f" path="/var/lib/kubelet/pods/b0d2f66f-a552-4e92-8270-db03275d821f/volumes" Jan 27 15:45:36 crc kubenswrapper[4833]: I0127 15:45:36.124090 4833 generic.go:334] "Generic (PLEG): container finished" podID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerID="36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f" exitCode=0 Jan 27 15:45:36 crc kubenswrapper[4833]: I0127 15:45:36.124164 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" event={"ID":"ba1dd023-060e-43f3-80ab-c85ce4f45b63","Type":"ContainerDied","Data":"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f"} Jan 27 15:45:36 crc kubenswrapper[4833]: I0127 15:45:36.126158 4833 scope.go:117] "RemoveContainer" containerID="36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f" Jan 27 15:45:36 crc kubenswrapper[4833]: I0127 15:45:36.518342 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jwx6p_must-gather-vpbbz_ba1dd023-060e-43f3-80ab-c85ce4f45b63/gather/0.log" Jan 27 15:45:44 crc kubenswrapper[4833]: I0127 15:45:44.471837 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jwx6p/must-gather-vpbbz"] Jan 27 15:45:44 crc kubenswrapper[4833]: I0127 15:45:44.472548 4833 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="copy" containerID="cri-o://0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd" gracePeriod=2 Jan 27 15:45:44 crc kubenswrapper[4833]: I0127 15:45:44.482282 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jwx6p/must-gather-vpbbz"] Jan 27 15:45:44 crc kubenswrapper[4833]: I0127 15:45:44.939846 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jwx6p_must-gather-vpbbz_ba1dd023-060e-43f3-80ab-c85ce4f45b63/copy/0.log" Jan 27 15:45:44 crc kubenswrapper[4833]: I0127 15:45:44.940479 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.034186 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output\") pod \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.034237 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67zf\" (UniqueName: \"kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf\") pod \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\" (UID: \"ba1dd023-060e-43f3-80ab-c85ce4f45b63\") " Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.044435 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf" (OuterVolumeSpecName: "kube-api-access-f67zf") pod "ba1dd023-060e-43f3-80ab-c85ce4f45b63" (UID: "ba1dd023-060e-43f3-80ab-c85ce4f45b63"). InnerVolumeSpecName "kube-api-access-f67zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.136630 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67zf\" (UniqueName: \"kubernetes.io/projected/ba1dd023-060e-43f3-80ab-c85ce4f45b63-kube-api-access-f67zf\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.215596 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jwx6p_must-gather-vpbbz_ba1dd023-060e-43f3-80ab-c85ce4f45b63/copy/0.log" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.217745 4833 generic.go:334] "Generic (PLEG): container finished" podID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerID="0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd" exitCode=143 Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.217821 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jwx6p/must-gather-vpbbz" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.234034 4833 scope.go:117] "RemoveContainer" containerID="0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.235768 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ba1dd023-060e-43f3-80ab-c85ce4f45b63" (UID: "ba1dd023-060e-43f3-80ab-c85ce4f45b63"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.239356 4833 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ba1dd023-060e-43f3-80ab-c85ce4f45b63-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.265423 4833 scope.go:117] "RemoveContainer" containerID="36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.324543 4833 scope.go:117] "RemoveContainer" containerID="0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd" Jan 27 15:45:45 crc kubenswrapper[4833]: E0127 15:45:45.325097 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd\": container with ID starting with 0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd not found: ID does not exist" containerID="0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.325143 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd"} err="failed to get container status \"0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd\": rpc error: code = NotFound desc = could not find container \"0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd\": container with ID starting with 0c59d682c5f07c60e6893d4ee7830dece73a5c61bb7608d102af9cbc87b2c8fd not found: ID does not exist" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.325179 4833 scope.go:117] "RemoveContainer" containerID="36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f" Jan 27 15:45:45 crc kubenswrapper[4833]: E0127 15:45:45.325672 4833 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f\": container with ID starting with 36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f not found: ID does not exist" containerID="36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f" Jan 27 15:45:45 crc kubenswrapper[4833]: I0127 15:45:45.325706 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f"} err="failed to get container status \"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f\": rpc error: code = NotFound desc = could not find container \"36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f\": container with ID starting with 36e03601ac88e45a37b2b2009dde8bf1c8dd23431b0336aeeab6bde63451a89f not found: ID does not exist" Jan 27 15:45:47 crc kubenswrapper[4833]: I0127 15:45:47.222974 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" path="/var/lib/kubelet/pods/ba1dd023-060e-43f3-80ab-c85ce4f45b63/volumes" Jan 27 15:45:57 crc kubenswrapper[4833]: I0127 15:45:57.701516 4833 scope.go:117] "RemoveContainer" containerID="4d85fba9714ebb36c5535cb56d46eeae3ffe3844b3775eead3b73dd5c5a4a7ac" Jan 27 15:45:57 crc kubenswrapper[4833]: I0127 15:45:57.727777 4833 scope.go:117] "RemoveContainer" containerID="2b205fecf33fdaa4a2a3990c059f4f0079c2a6d19b3b45f9c1f62462f8acc9c5" Jan 27 15:45:57 crc kubenswrapper[4833]: I0127 15:45:57.789766 4833 scope.go:117] "RemoveContainer" containerID="0123911d1436ec6a1e7ec65841c209b6b052cda1334e787b857dc9d5b6bef0ad" Jan 27 15:46:32 crc kubenswrapper[4833]: I0127 15:46:32.260961 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:46:32 crc kubenswrapper[4833]: I0127 15:46:32.261395 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:46:57 crc kubenswrapper[4833]: I0127 15:46:57.881076 4833 scope.go:117] "RemoveContainer" containerID="6ebf875a475bdf5d4ee132a8d7abb37636f6cd53c59153a4dfc474f90a7af6fd" Jan 27 15:46:57 crc kubenswrapper[4833]: I0127 15:46:57.904400 4833 scope.go:117] "RemoveContainer" containerID="17933e07924efdf593de31797ab32cf393207f2021b1e5cef482394b8fc61364" Jan 27 15:47:02 crc kubenswrapper[4833]: I0127 15:47:02.261248 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:47:02 crc kubenswrapper[4833]: I0127 15:47:02.263400 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.261224 4833 patch_prober.go:28] interesting pod/machine-config-daemon-mcx7z container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.261881 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.261942 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.262810 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6"} pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.262862 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerName="machine-config-daemon" containerID="cri-o://65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" gracePeriod=600 Jan 27 15:47:32 crc kubenswrapper[4833]: E0127 15:47:32.399820 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.965128 4833 generic.go:334] 
"Generic (PLEG): container finished" podID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" exitCode=0 Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.965186 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" event={"ID":"cd82cea5-8cab-4c03-b640-2b4d45ba7e53","Type":"ContainerDied","Data":"65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6"} Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.965236 4833 scope.go:117] "RemoveContainer" containerID="19fbad5f8e4cbbd62ed98793e52d0479ed3787996719f51e74a8f45e901e9012" Jan 27 15:47:32 crc kubenswrapper[4833]: I0127 15:47:32.967583 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:47:32 crc kubenswrapper[4833]: E0127 15:47:32.968219 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:47:45 crc kubenswrapper[4833]: I0127 15:47:45.211538 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:47:45 crc kubenswrapper[4833]: E0127 15:47:45.212344 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" 
podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:47:56 crc kubenswrapper[4833]: I0127 15:47:56.210867 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:47:56 crc kubenswrapper[4833]: E0127 15:47:56.211768 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:48:08 crc kubenswrapper[4833]: I0127 15:48:08.210299 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:48:08 crc kubenswrapper[4833]: E0127 15:48:08.211257 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:48:19 crc kubenswrapper[4833]: I0127 15:48:19.220495 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:48:19 crc kubenswrapper[4833]: E0127 15:48:19.221261 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:48:33 crc kubenswrapper[4833]: I0127 15:48:33.210685 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:48:33 crc kubenswrapper[4833]: E0127 15:48:33.211549 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:48:46 crc kubenswrapper[4833]: I0127 15:48:46.211390 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:48:46 crc kubenswrapper[4833]: E0127 15:48:46.212720 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:49:01 crc kubenswrapper[4833]: I0127 15:49:01.210760 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:49:01 crc kubenswrapper[4833]: E0127 15:49:01.226896 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:49:16 crc kubenswrapper[4833]: I0127 15:49:16.210843 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:49:16 crc kubenswrapper[4833]: E0127 15:49:16.212222 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:49:27 crc kubenswrapper[4833]: I0127 15:49:27.210924 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:49:27 crc kubenswrapper[4833]: E0127 15:49:27.212036 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:49:39 crc kubenswrapper[4833]: I0127 15:49:39.220539 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:49:39 crc kubenswrapper[4833]: E0127 15:49:39.221783 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:49:54 crc kubenswrapper[4833]: I0127 15:49:54.210267 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:49:54 crc kubenswrapper[4833]: E0127 15:49:54.211195 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:50:08 crc kubenswrapper[4833]: I0127 15:50:08.211285 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:50:08 crc kubenswrapper[4833]: E0127 15:50:08.211978 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.226068 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:50:19 crc kubenswrapper[4833]: E0127 15:50:19.227131 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.920715 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:19 crc kubenswrapper[4833]: E0127 15:50:19.922607 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="gather" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.922695 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="gather" Jan 27 15:50:19 crc kubenswrapper[4833]: E0127 15:50:19.922782 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="copy" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.922845 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="copy" Jan 27 15:50:19 crc kubenswrapper[4833]: E0127 15:50:19.922912 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b16317c-9c75-428a-a604-87a0482d531d" containerName="collect-profiles" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.922971 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b16317c-9c75-428a-a604-87a0482d531d" containerName="collect-profiles" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.923229 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="copy" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.923310 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1dd023-060e-43f3-80ab-c85ce4f45b63" containerName="gather" Jan 27 15:50:19 crc 
kubenswrapper[4833]: I0127 15:50:19.923398 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b16317c-9c75-428a-a604-87a0482d531d" containerName="collect-profiles" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.924957 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:19 crc kubenswrapper[4833]: I0127 15:50:19.958953 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.009748 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.009873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.009922 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsn8h\" (UniqueName: \"kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.120603 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.120702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsn8h\" (UniqueName: \"kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.120786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.121155 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.121220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.145052 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsn8h\" (UniqueName: 
\"kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h\") pod \"certified-operators-8pn9n\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.253247 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:20 crc kubenswrapper[4833]: I0127 15:50:20.772885 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:21 crc kubenswrapper[4833]: I0127 15:50:21.593805 4833 generic.go:334] "Generic (PLEG): container finished" podID="50dc7437-13e1-46b7-8f59-770a255b7f5f" containerID="86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b" exitCode=0 Jan 27 15:50:21 crc kubenswrapper[4833]: I0127 15:50:21.594027 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerDied","Data":"86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b"} Jan 27 15:50:21 crc kubenswrapper[4833]: I0127 15:50:21.594107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerStarted","Data":"131cc089968ddfabe9be3e6cfda48dc693ba133b0460dd49ae62d5aba9ab40ac"} Jan 27 15:50:21 crc kubenswrapper[4833]: I0127 15:50:21.596045 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:50:22 crc kubenswrapper[4833]: I0127 15:50:22.605503 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerStarted","Data":"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1"} Jan 27 15:50:23 
crc kubenswrapper[4833]: I0127 15:50:23.618852 4833 generic.go:334] "Generic (PLEG): container finished" podID="50dc7437-13e1-46b7-8f59-770a255b7f5f" containerID="942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1" exitCode=0 Jan 27 15:50:23 crc kubenswrapper[4833]: I0127 15:50:23.618923 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerDied","Data":"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1"} Jan 27 15:50:24 crc kubenswrapper[4833]: I0127 15:50:24.629233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerStarted","Data":"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c"} Jan 27 15:50:24 crc kubenswrapper[4833]: I0127 15:50:24.653125 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8pn9n" podStartSLOduration=3.220126093 podStartE2EDuration="5.653106829s" podCreationTimestamp="2026-01-27 15:50:19 +0000 UTC" firstStartedPulling="2026-01-27 15:50:21.595810647 +0000 UTC m=+5923.247135039" lastFinishedPulling="2026-01-27 15:50:24.028791373 +0000 UTC m=+5925.680115775" observedRunningTime="2026-01-27 15:50:24.64505666 +0000 UTC m=+5926.296381072" watchObservedRunningTime="2026-01-27 15:50:24.653106829 +0000 UTC m=+5926.304431232" Jan 27 15:50:30 crc kubenswrapper[4833]: I0127 15:50:30.253620 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:30 crc kubenswrapper[4833]: I0127 15:50:30.254490 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:30 crc kubenswrapper[4833]: I0127 15:50:30.321689 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:30 crc kubenswrapper[4833]: I0127 15:50:30.763403 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:30 crc kubenswrapper[4833]: I0127 15:50:30.827967 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:32 crc kubenswrapper[4833]: I0127 15:50:32.706715 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8pn9n" podUID="50dc7437-13e1-46b7-8f59-770a255b7f5f" containerName="registry-server" containerID="cri-o://010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c" gracePeriod=2 Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.172506 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.216434 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:50:33 crc kubenswrapper[4833]: E0127 15:50:33.216659 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.304086 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities\") pod \"50dc7437-13e1-46b7-8f59-770a255b7f5f\" 
(UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.304317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsn8h\" (UniqueName: \"kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h\") pod \"50dc7437-13e1-46b7-8f59-770a255b7f5f\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.304359 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content\") pod \"50dc7437-13e1-46b7-8f59-770a255b7f5f\" (UID: \"50dc7437-13e1-46b7-8f59-770a255b7f5f\") " Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.307279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities" (OuterVolumeSpecName: "utilities") pod "50dc7437-13e1-46b7-8f59-770a255b7f5f" (UID: "50dc7437-13e1-46b7-8f59-770a255b7f5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.312198 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h" (OuterVolumeSpecName: "kube-api-access-tsn8h") pod "50dc7437-13e1-46b7-8f59-770a255b7f5f" (UID: "50dc7437-13e1-46b7-8f59-770a255b7f5f"). InnerVolumeSpecName "kube-api-access-tsn8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.409249 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsn8h\" (UniqueName: \"kubernetes.io/projected/50dc7437-13e1-46b7-8f59-770a255b7f5f-kube-api-access-tsn8h\") on node \"crc\" DevicePath \"\"" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.409279 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.719210 4833 generic.go:334] "Generic (PLEG): container finished" podID="50dc7437-13e1-46b7-8f59-770a255b7f5f" containerID="010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c" exitCode=0 Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.719268 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerDied","Data":"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c"} Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.719297 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pn9n" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.719922 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pn9n" event={"ID":"50dc7437-13e1-46b7-8f59-770a255b7f5f","Type":"ContainerDied","Data":"131cc089968ddfabe9be3e6cfda48dc693ba133b0460dd49ae62d5aba9ab40ac"} Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.720077 4833 scope.go:117] "RemoveContainer" containerID="010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.737928 4833 scope.go:117] "RemoveContainer" containerID="942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.760316 4833 scope.go:117] "RemoveContainer" containerID="86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.763179 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50dc7437-13e1-46b7-8f59-770a255b7f5f" (UID: "50dc7437-13e1-46b7-8f59-770a255b7f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.815727 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50dc7437-13e1-46b7-8f59-770a255b7f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.861696 4833 scope.go:117] "RemoveContainer" containerID="010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c" Jan 27 15:50:33 crc kubenswrapper[4833]: E0127 15:50:33.862458 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c\": container with ID starting with 010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c not found: ID does not exist" containerID="010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.862523 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c"} err="failed to get container status \"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c\": rpc error: code = NotFound desc = could not find container \"010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c\": container with ID starting with 010097506f84e07d81f2a1a3deadad007607b0dda2ef293589b43e1485a27f2c not found: ID does not exist" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.862566 4833 scope.go:117] "RemoveContainer" containerID="942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1" Jan 27 15:50:33 crc kubenswrapper[4833]: E0127 15:50:33.863395 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1\": container with ID starting with 942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1 not found: ID does not exist" containerID="942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.863466 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1"} err="failed to get container status \"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1\": rpc error: code = NotFound desc = could not find container \"942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1\": container with ID starting with 942711761ae286b5e926e4c3a0128eeea4350cafaa3f74c747cc58c2a46625d1 not found: ID does not exist" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.863526 4833 scope.go:117] "RemoveContainer" containerID="86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b" Jan 27 15:50:33 crc kubenswrapper[4833]: E0127 15:50:33.864034 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b\": container with ID starting with 86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b not found: ID does not exist" containerID="86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b" Jan 27 15:50:33 crc kubenswrapper[4833]: I0127 15:50:33.864074 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b"} err="failed to get container status \"86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b\": rpc error: code = NotFound desc = could not find container \"86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b\": container with ID 
starting with 86bce6579ad888fbcb1e3ccb3ba34e1c6e019375022c519edbbda4c4fdba8f4b not found: ID does not exist" Jan 27 15:50:34 crc kubenswrapper[4833]: I0127 15:50:34.073485 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:34 crc kubenswrapper[4833]: I0127 15:50:34.081971 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8pn9n"] Jan 27 15:50:35 crc kubenswrapper[4833]: I0127 15:50:35.225721 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50dc7437-13e1-46b7-8f59-770a255b7f5f" path="/var/lib/kubelet/pods/50dc7437-13e1-46b7-8f59-770a255b7f5f/volumes" Jan 27 15:50:44 crc kubenswrapper[4833]: I0127 15:50:44.211117 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:50:44 crc kubenswrapper[4833]: E0127 15:50:44.211872 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:50:58 crc kubenswrapper[4833]: I0127 15:50:58.211123 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:50:58 crc kubenswrapper[4833]: E0127 15:50:58.211933 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:51:12 crc kubenswrapper[4833]: I0127 15:51:12.211480 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:51:12 crc kubenswrapper[4833]: E0127 15:51:12.212312 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:51:24 crc kubenswrapper[4833]: I0127 15:51:24.211785 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:51:24 crc kubenswrapper[4833]: E0127 15:51:24.212649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:51:37 crc kubenswrapper[4833]: I0127 15:51:37.210693 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:51:37 crc kubenswrapper[4833]: E0127 15:51:37.211386 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53" Jan 27 15:51:51 crc kubenswrapper[4833]: I0127 15:51:51.211359 4833 scope.go:117] "RemoveContainer" containerID="65e99bd599837b4b230d42cc07bd1641276b681e6fd2e4f9207d07b6416851e6" Jan 27 15:51:51 crc kubenswrapper[4833]: E0127 15:51:51.212126 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mcx7z_openshift-machine-config-operator(cd82cea5-8cab-4c03-b640-2b4d45ba7e53)\"" pod="openshift-machine-config-operator/machine-config-daemon-mcx7z" podUID="cd82cea5-8cab-4c03-b640-2b4d45ba7e53"